Glama

Server Details

Polymarket + HIP-4 + Hyperliquid perps for Claude. 22 tools, signals & arb. Free tier.

Status: Healthy
Transport: Streamable HTTP
Repository: RavioleLabs/predmcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 23 of 23 tools scored. Lowest: 3.1/5.

Server Coherence: A
Disambiguation: 4/5

Tools are mostly distinct, with minor overlap between get_signals and get_pm_hl_divergences, but descriptions differentiate them adequately. Most tools target specific data or signals with clear boundaries.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun snake_case pattern (e.g., get_funding_rates, create_api_key), making it easy to understand a tool's action and target.

Tool Count: 4/5

23 tools is slightly above the typical range but justified by the broad domain covering multiple platforms and advanced analytical signals. Each tool serves a distinct purpose, earning its place.

Completeness: 4/5

The tool set is comprehensive for data retrieval and signal generation, covering market data, orderbooks, funding, liquidations, whales, and divergences. Minor gaps exist (e.g., no trading or user account tools), but these appear out of scope for a data API.

Available Tools

23 tools
create_api_key (Create API Key): A

Generate a free PredMCP API key instantly — no email required. Returns the key and ready-to-use MCP config. Call this first if you do not have a key yet. Free tier: 100 calls/day.

Parameters (JSON Schema):
- email (required): Your email address — used to identify your key and for account recovery
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations have readOnlyHint=false and openWorldHint=false, but the description adds that it returns the key and MCP config. However, there is a contradiction: the description says 'no email required' while the input schema requires an email field. This inconsistency reduces transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise with three sentences, each serving a purpose: what it does, when to use it, and the free tier limit. No extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the output (key and config) adequately despite no output schema. However, it does not mention error cases or duplicate keys, and the contradiction about email requirement slightly reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter that already has a description. The description adds that the email is used for identification and account recovery, which provides useful context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates a free PredMCP API key instantly. It uses the specific verb 'generate' and the resource 'API key', and it distinguishes itself from sibling tools which are all data-fetching operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Call this first if you do not have a key yet', providing clear guidance on when to use the tool. It also implies not to use it if you already have a key, and mentions the free tier limit, which helps with context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_funding_outliers (Get Funding Outliers): A, Read-only

Hyperliquid perps whose current funding rate deviates significantly from their 7-day average. A spike vs baseline is a stronger signal than raw rate.

Parameters (JSON Schema):
- days (optional): Historical window in days to compute the baseline average (default: 7)
- min_deviation_factor (optional): Minimum ratio of |current_rate| / |avg_rate| to qualify as outlier (default: 2x)
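The outlier rule stated in the parameter descriptions can be sketched as a pure function. This is a guess at the server's logic from its documentation, not its actual implementation; the zero-baseline handling in particular is an assumption.

```python
def is_funding_outlier(current_rate: float, avg_rate: float,
                       min_deviation_factor: float = 2.0) -> bool:
    """Flag a perp whose current funding rate deviates sharply from baseline.

    Mirrors the documented rule: |current_rate| / |avg_rate| >= min_deviation_factor.
    A zero baseline is treated as an outlier only when the current rate is
    nonzero; the server's real edge-case handling is undocumented.
    """
    if avg_rate == 0:
        return current_rate != 0
    return abs(current_rate) / abs(avg_rate) >= min_deviation_factor
```

With the default 2x factor, a rate four times its 7-day average qualifies while a rate equal to its average does not.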
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true, so the tool is safe and open-ended. The description adds the insight that spike vs baseline is a stronger signal, which is valuable beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-load the purpose and key insight without any wasted words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but the tool's purpose is simple and the description covers inputs and logic. Output format is implied (list of perps), and the description is sufficient given the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds interpretive context (spike vs baseline stronger) that goes beyond the schema's parameter descriptions, which already define days and min_deviation_factor.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool finds Hyperliquid perps with current funding rate deviating significantly from their 7-day average, which distinguishes it from siblings like get_funding_rates and get_top_funding_rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for outlier detection based on deviation, but does not explicitly state when to avoid or mention alternatives. However, context signals and sibling names provide implicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_funding_rates (Get Funding Rates): A, Read-only

Current funding rates for Hyperliquid perpetuals. Positive rate = longs pay shorts (bearish bias); negative = shorts pay longs (bullish bias).

Parameters (JSON Schema):
- coins (optional): List of asset tickers to fetch, e.g. ["BTC", "ETH"]. Omit to fetch all available assets.
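The sign convention from the tool description maps directly to a small interpreter. A minimal sketch; treating a zero rate as neutral is an assumption the description does not make.

```python
def funding_bias(rate: float) -> str:
    """Interpret a funding rate's sign per the tool description:
    positive = longs pay shorts (bearish bias);
    negative = shorts pay longs (bullish bias).
    Zero is classified as neutral here, an illustrative assumption."""
    if rate > 0:
        return "longs pay shorts (bearish bias)"
    if rate < 0:
        return "shorts pay longs (bullish bias)"
    return "neutral"
```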
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds interpretation (positive/negative rates), but does not disclose data freshness or rate limits. Still, it provides useful behavioral context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no unnecessary words. The first sentence states the purpose, the second adds interpretive context. Each sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description could mention the output format, but for a simple list of funding rates the semantic interpretation provided is sufficient. The tool is low-complexity with 1 parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description reinforces the optional 'coins' parameter with an example (e.g., ['BTC', 'ETH']), adding value beyond the schema's description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides 'current funding rates for Hyperliquid perpetuals' and explains the meaning of positive/negative rates. This distinguishes it from sibling tools like get_funding_outliers and get_top_funding_rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by showing how to omit coins for all assets, but does not explicitly state when to use this tool versus alternatives like get_funding_outliers or get_top_funding_rates.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_hip4_vs_pm_arb (Get HIP-4 vs PM Arb): A, Read-only

Finds the same underlying market priced on both HIP-4 (on-chain Hyperliquid) and Polymarket, flagging spreads above threshold. A spread means one venue is mispriced relative to the other.

Parameters (JSON Schema):
- min_spread_pct (optional): Minimum spread between HIP-4 and Polymarket YES prices to flag (percentage points, default: 3)
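Since min_spread_pct is in percentage points and YES prices on both venues are probabilities in [0, 1], the flagging rule presumably reduces to the following. A sketch under that assumption; the server's actual spread computation is not documented.

```python
def spread_pct(hip4_yes: float, pm_yes: float) -> float:
    """Absolute spread between the two venues' YES prices, in percentage
    points. Assumes both prices are probabilities in [0, 1]."""
    return abs(hip4_yes - pm_yes) * 100


def flag_arb(hip4_yes: float, pm_yes: float,
             min_spread_pct: float = 3.0) -> bool:
    """Flag a market pair when the spread meets or exceeds the threshold."""
    return spread_pct(hip4_yes, pm_yes) >= min_spread_pct
```

A 0.62 vs 0.58 pair is a 4-point spread and clears the default 3-point threshold; 0.50 vs 0.52 does not.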
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint: true, openWorldHint: true) indicate a safe read operation and non-exhaustive results. The description adds that the tool flags spreads above a threshold, which is consistent. However, it does not disclose output format, pagination, or how the spread is computed beyond 'mispriced relative.' Adds value beyond annotations but could be more explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. The first sentence states the core action and second explains the concept. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple (1 optional parameter, read-only) and the description covers the key idea. However, without an output schema, mentioning the return structure (e.g., list of markets with spread percentages) would improve completeness. Otherwise adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the single parameter min_spread_pct is well-described with default, min, max, and meaning. The tool description reinforces its role as a threshold. Full coverage sets a baseline of 3, and the description's added context justifies a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool's purpose: finding the same underlying market on HIP-4 and Polymarket and flagging spreads above a threshold. It uses specific verb 'Finds' and specifies the resources (HIP-4, Polymarket), distinguishing it from sibling tools like get_pm_hl_divergences which compare Polymarket and Hyperliquid funding rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for detecting mispricing but does not explicitly state when to use this tool versus siblings like get_pm_hl_divergences or get_funding_outliers. No guidance on when not to use or how to interpret threshold. Provides clear context but lacks exclusions or alternative tool references.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_hl_funding_pm_correlation (Get HL Funding / PM Correlation): A, Read-only

Pairs each Hyperliquid asset (with notable funding) with related Polymarket markets, showing whether funding direction and PM probability are aligned or divergent.

Parameters (JSON Schema):
- limit (optional): Number of correlated pairs to return (default: 15)
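One plausible reading of "aligned or divergent": negative funding implies a bullish perp bias (shorts pay longs), which aligns with a Polymarket YES probability above 50%. The 0.5 cutoff and the whole rule are assumptions for illustration; the server does not document its classification.

```python
def funding_pm_alignment(funding_rate: float, pm_yes_prob: float) -> str:
    """Classify a Hyperliquid funding direction against a related Polymarket
    YES probability. Negative funding is read as a bullish perp bias and a
    YES probability above 0.5 as a bullish market; both readings are
    illustrative assumptions."""
    perp_bullish = funding_rate < 0
    pm_bullish = pm_yes_prob > 0.5
    return "aligned" if perp_bullish == pm_bullish else "divergent"
```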
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate a safe read operation (readOnlyHint) and variable results (openWorldHint). The description adds that it pairs assets with notable funding and shows alignment, which is useful context but does not disclose other behavioral traits like data freshness or pagination. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that conveys the core purpose efficiently without unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single optional parameter and annotations, the description is mostly adequate but could be enhanced by clarifying the return format (e.g., whether alignment is a boolean or score). No output schema exists, so the description should provide more detail on output structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (the limit parameter is documented). The description does not mention parameters or add meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool pairs Hyperliquid assets with Polymarket markets and shows alignment/divergence of funding and probability. It uses specific verbs and resources, distinguishing it from sibling tools like get_pm_hl_divergences.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as get_pm_hl_divergences or get_funding_outliers. No when-not or usage context is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_late_game_sports (Get Late Game Sports): A, Read-only

Sports prediction markets on Polymarket closing within a few hours with a high-certainty leading outcome. Targets near-certain resolution for late-game positioning.

Parameters (JSON Schema):
- hours_max (optional): Maximum hours until market closes (default: 6h)
- certainty_pct (optional): Minimum leading outcome probability as percentage, e.g. 85 = 85% (default: 85)
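The two filters combine into a simple predicate. A sketch only: the market dict keys ('hours_to_close', 'leading_prob') are invented here, since the tool publishes no output schema.

```python
def late_game_candidates(markets, hours_max: float = 6,
                         certainty_pct: float = 85):
    """Filter markets closing within hours_max whose leading outcome
    probability meets certainty_pct (a percentage, converted to a 0-1
    fraction for comparison). The dict keys are assumed, not documented."""
    threshold = certainty_pct / 100.0
    return [m for m in markets
            if m["hours_to_close"] <= hours_max
            and m["leading_prob"] >= threshold]
```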
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint and openWorldHint. The description adds meaningful context about targeting near-certain resolution for late-game positioning, which goes beyond the annotations. However, it does not disclose rate limits or data freshness, which would be ideal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the action and purpose, with no extraneous words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite good annotations and parameter coverage, the description lacks any information about the output format. With no output schema, the agent is left uncertain about what the tool returns, which is a notable gap for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The description does not add new meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves sports prediction markets on Polymarket that are closing soon with a high-certainty outcome. This distinguishes it from siblings like get_markets_near_resolution or get_odds, as it specifically targets late-game sports markets with near-certain outcomes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for late-game positioning but does not explicitly compare with alternative tools like get_markets_near_resolution or get_signals. No when-to-use or when-not-to-use guidance is provided, leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_liquidation_clusters (Get Liquidation Clusters): A, Read-only

Estimated price levels where mass liquidations concentrate for a given Hyperliquid perp, computed from mark price and standard leverage multiples. Higher nearby orderbook liquidity = stronger support/resistance.

Parameters (JSON Schema):
- coin (required): Asset ticker to analyze, e.g. "BTC", "ETH", "SOL"
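The description says clusters are computed from mark price and standard leverage multiples. A plain reading of that method, where both the leverage tiers and the 1/L offset are illustrative assumptions rather than the server's documented formula:

```python
def liquidation_clusters(mark_price: float, leverages=(5, 10, 20, 50)):
    """Approximate liquidation price levels for longs and shorts.

    For leverage L, a long opened at the mark is liquidated roughly at
    mark * (1 - 1/L) and a short at mark * (1 + 1/L), ignoring fees and
    maintenance margin. Leverage tiers and formula are assumptions."""
    return {
        "long": [mark_price * (1 - 1 / L) for L in leverages],
        "short": [mark_price * (1 + 1 / L) for L in leverages],
    }
```

At a mark of 100, this puts 5x longs near 80 and 50x shorts near 102, which is the kind of level the description suggests comparing against nearby orderbook liquidity.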
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint and openWorldHint. The description adds context about the estimation method and interpretation of results (stronger support/resistance with higher liquidity), which goes beyond annotations without contradicting them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences deliver all critical information efficiently. The first sentence states purpose and computation, the second adds context. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-param tool with no output schema, the description provides sufficient context about what the output represents and its interpretation. However, it lacks explicit mention of output format or structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter 'coin' described as an asset ticker. The description adds minimal extra meaning beyond the schema, maintaining the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns 'estimated price levels where mass liquidations concentrate' for a Hyperliquid perp, specifying the computation method. It distinguishes itself from siblings as the only tool focused on liquidation clusters.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing liquidation concentration levels for a perp, but does not explicitly state when to use or not use it, nor mention alternative tools for similar data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_context (Get Market Context): A, Read-only

Unified intelligence snapshot for any topic, asset, or keyword: all matching Polymarket and HIP-4 prediction markets combined with live Hyperliquid perp data (price, funding, OI). One call replaces 3+ separate lookups.

Parameters (JSON Schema):
- query (required): Topic, asset, or keyword to look up — e.g. "BTC", "Iran", "Fed rate cut", "Trump"
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and open-world nature; description adds that it combines three distinct data types and returns all matching results, with no contradiction to annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no fluff. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a snapshot tool with one parameter and no output schema, description adequately explains inputs and scope, though output format is not detailed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers query parameter with examples; description reinforces same meaning without adding new semantics, so baseline score for high coverage applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it provides a unified intelligence snapshot combining Polymarket, HIP-4 prediction markets, and Hyperliquid perp data for any query, distinguishing it from sibling tools that only cover individual data sources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description explicitly notes it replaces 3+ separate lookups, implying use for broad overviews, but does not list specific sibling alternatives or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_markets (Get Markets): A, Read-only

Live prediction markets from Polymarket and/or HIP-4, sorted by volume. Returns title, YES/NO prices, 24h volume, and expiry.

Parameters (JSON Schema):
- limit (optional): Number of markets to return (1–100, default: 20)
- active (optional): Filter to active/open markets only (default: true)
- platform (optional): Data source: "polymarket", "hip4", or "all" (default: all)
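The schema's constraints (limit 1-100, a three-value platform enum, defaults) can be enforced client-side before the call ever leaves the agent. A defensive sketch, not the server's own validation:

```python
def normalize_get_markets_args(limit: int = 20, active: bool = True,
                               platform: str = "all") -> dict:
    """Clamp and validate get_markets arguments against the documented
    schema: limit is clamped into 1-100, platform must be one of the
    three documented values."""
    if platform not in ("polymarket", "hip4", "all"):
        raise ValueError(
            f"platform must be 'polymarket', 'hip4', or 'all', got {platform!r}")
    return {"limit": max(1, min(100, limit)),
            "active": active,
            "platform": platform}
```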
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and openWorldHint, so the description's mention of live data and specific fields adds useful context without contradicting annotations. It does not disclose auth or rate limits, but these are not expected given the tool's nature.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that immediately conveys purpose, sources, sorting, and output fields. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately outlines return fields (title, prices, volume, expiry). It could mention sorting direction (implicitly descending) but is otherwise complete for a simple list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for all 3 parameters. The description adds 'sorted by volume' which is not in schema, but this is minor. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves live prediction markets from specific sources (Polymarket/HIP-4), sorted by volume, and lists the returned fields. This distinguishes it from siblings like search_markets or get_markets_near_resolution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for fetching current markets sorted by volume but does not explicitly state when to use it versus alternatives like search_markets or get_movers. No when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_markets_near_resolution (Get Markets Near Resolution): A, Read-only

Polymarket markets resolving within the next N hours with a leading probability above threshold. Useful for resolution arbitrage and last-minute positioning.

Parameters (JSON Schema):
- hours (optional): Maximum hours until resolution (default: 24h, max: 168h = 7 days)
- min_prob (optional): Minimum leading outcome probability to include (default: 0.7 = 70%)
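Note the unit mismatch with get_late_game_sports: certainty_pct there is a percentage (85), while min_prob here is a 0-1 fraction (0.7). A filter sketch making that explicit; the market dict keys are assumptions, as no output schema exists.

```python
def near_resolution(markets, hours: float = 24, min_prob: float = 0.7):
    """Filter markets resolving within `hours` whose leading probability is
    at least `min_prob` (a 0-1 fraction, unlike the percentage-valued
    certainty_pct of get_late_game_sports). Dict keys are assumed."""
    hours = min(hours, 168)  # documented maximum: 7 days
    return [m for m in markets
            if m["hours_to_resolution"] <= hours
            and m["leading_prob"] >= min_prob]
```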
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and open-world behavior. The description adds context about time and probability filtering, which is consistent. However, it does not explain the open-world aspect further or detail any side effects (none expected).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that is front-loaded with the core function ('Polymarket markets resolving...') and includes usage context. Extremely concise with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description adequately explains the filtering criteria but does not describe the output structure (e.g., which market fields are returned). Since there is no output schema, more detail would improve completeness. However, the tool's purpose is simple and likely consistent with other tools on the server.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with adequate descriptions for both parameters. The description adds no new parameter semantics beyond paraphrasing the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it retrieves Polymarket markets resolving within a time window with high probability, and explicitly mentions use cases (resolution arbitrage, last-minute positioning), effectively distinguishing it from sibling tools like get_markets or get_late_game_sports.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description provides context of when to use the tool ('resolution arbitrage and last-minute positioning') but does not explicitly state when not to use it or point to alternative tools, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_movers (Get Movers), Grade A
Read-only

Top prediction markets ranked by 24h volume spike or biggest YES/NO price swing. Surfaces breaking news bets and momentum plays across Polymarket and HIP-4.

Parameters (JSON Schema):
  limit (optional): Number of top movers to return (1–20, default: 10)
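
A client-side sketch of the ranking the description implies, under stated assumptions: the combined score (larger of the volume-spike ratio and a scaled absolute price swing) is a made-up heuristic, and the field names are illustrative; the server does not document its actual formula.

```python
def top_movers(markets, limit=10):
    """Rank markets by the larger of the 24h volume-spike ratio and a
    scaled absolute YES price swing; `limit` is clamped to the schema's
    1-20 range. Scoring here is an invented heuristic."""
    limit = max(1, min(limit, 20))
    score = lambda m: max(m["volume_spike_ratio"], abs(m["price_change_24h"]) * 10)
    return sorted(markets, key=score, reverse=True)[:limit]

markets = [
    {"name": "X", "volume_spike_ratio": 5.0, "price_change_24h": 0.02},
    {"name": "Y", "volume_spike_ratio": 1.2, "price_change_24h": 0.30},
    {"name": "Z", "volume_spike_ratio": 2.0, "price_change_24h": 0.01},
]
print([m["name"] for m in top_movers(markets, limit=2)])  # ['X', 'Y']
```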
Behavior: 3/5

Annotations indicate readOnlyHint and openWorldHint, and the description adds context about covering Polymarket and HIP-4 markets. However, it does not disclose data freshness, pagination details, or whether results are based on a specific time window.

Conciseness: 5/5

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and scope without redundancy.

Completeness: 4/5

For a simple list tool with one parameter and no output schema, the description adequately explains what is returned (top markets ranked by metrics). It could mention typical fields in the output but is sufficient overall.

Parameters: 3/5

Schema coverage is 100% and the single parameter 'limit' is fully described in the schema. The description adds no additional semantic meaning beyond what the schema provides.

Purpose: 5/5

The description specifies the tool retrieves top prediction markets ranked by 24h volume spike or YES/NO price swing, clearly distinguishing it from sibling tools like get_volume_spikes or get_odds.

Usage Guidelines: 3/5

The description implies usage for finding breaking news bets and momentum plays, but does not explicitly state when to use this tool over alternatives like get_markets or get_signals. No when-not-to-use guidance.

get_odds (Get Odds), Grade A
Read-only

Current YES/NO prices and implied probability for any Polymarket or HIP-4 market token.

Parameters (JSON Schema):
  platform (required): Platform the market is on: "polymarket" or "hip4"
  identifier (required): For Polymarket: the token_id of the YES or NO outcome. For HIP-4: the base asset ticker (e.g. "BTC")
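
The identifier's meaning depends on the platform, which a caller can validate before invoking the tool. A minimal sketch, assuming Polymarket token IDs are numeric strings and HIP-4 tickers are alphabetic; both checks are illustrative simplifications:

```python
def build_odds_args(platform, identifier):
    """Validate the platform/identifier pairing the schema describes:
    Polymarket expects a token_id, HIP-4 expects a base-asset ticker."""
    if platform not in ("polymarket", "hip4"):
        raise ValueError("platform must be 'polymarket' or 'hip4'")
    if platform == "polymarket" and not identifier.isdigit():
        raise ValueError("Polymarket identifier should be a numeric token_id")
    if platform == "hip4" and not identifier.isalpha():
        raise ValueError("HIP-4 identifier should be a ticker like 'BTC'")
    return {"platform": platform, "identifier": identifier}
```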
Behavior: 3/5

Annotations already indicate readOnlyHint=true and openWorldHint=true, so the description's claim of retrieving 'current' prices adds minimal behavioral context. No mention of rate limits, data freshness, or other traits beyond what annotations provide.

Conciseness: 5/5

Single, front-loaded sentence with no wasted words. Clearly communicates the tool's function without unnecessary detail.

Completeness: 4/5

Given the tool's low complexity and two parameters, the description adequately conveys what it returns (prices and probability). No output schema exists, but the description gives sufficient context for an agent to understand the result.

Parameters: 3/5

Input schema has 100% description coverage for its two parameters (platform and identifier). The description repeats the platform options but does not add meaning beyond the schema's own descriptions.

Purpose: 5/5

The description clearly states the tool retrieves current YES/NO prices and implied probability for Polymarket or HIP-4 market tokens. It uses a specific verb (get) and resource (odds/prices), distinguishing it from sibling tools that focus on funding rates, liquidation clusters, etc.

Usage Guidelines: 3/5

No explicit guidance on when to use this tool versus alternatives (e.g., get_markets, get_orderbook). While the purpose is clear, the description does not mention exclusions or contexts where other tools would be more appropriate.

get_oi_near_cap (Get OI Near Cap), Grade A
Read-only

Lists Hyperliquid perps that are currently at the open interest cap — new long positions cannot be opened. Use as a blacklist to avoid getting rejected on entry.

Parameters (JSON Schema): none
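
The blacklist semantics the description gives (perps whose open interest has hit the cap, so new longs would be rejected) reduce to a simple filter. The `oi` and `oi_cap` fields below are illustrative, since the tool publishes no output schema:

```python
def at_oi_cap(perps):
    """Tickers whose open interest has reached the cap, meaning new long
    positions would be rejected on entry."""
    return [p["coin"] for p in perps if p["oi"] >= p["oi_cap"]]

perps = [
    {"coin": "HYPE", "oi": 9_900_000, "oi_cap": 10_000_000},
    {"coin": "DOGE", "oi": 10_000_000, "oi_cap": 10_000_000},
]
print(at_oi_cap(perps))  # ['DOGE']
```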

Behavior: 4/5

Annotations declare readOnlyHint and openWorldHint. Description adds value by explaining the business logic (new long positions cannot be opened), going beyond the structured data.

Conciseness: 5/5

Two concise sentences: the first states functionality, the second gives explicit usage guidance. No unnecessary words.

Completeness: 4/5

Given no output schema and no parameters, the description provides sufficient context for a read-only list tool. It could hint at output format but that is not essential.

Parameters: 4/5

No parameters exist, so schema coverage is 100%. The baseline score of 4 is appropriate as the description does not need to compensate.

Purpose: 5/5

Description clearly states the tool lists Hyperliquid perps at the open interest cap, with a specific verb 'Lists' and a defined resource. It differentiates from siblings like get_open_interest by focusing on cap status.

Usage Guidelines: 4/5

Explicitly states 'Use as a blacklist to avoid getting rejected on entry,' providing clear when-to-use context. Does not name alternatives but implies difference from general OI tools.

get_open_interest (Get Open Interest), Grade B
Read-only

Total open interest in USD and contracts for Hyperliquid perpetuals. Rising OI + rising price = strong trend; rising OI + falling price = short build-up.

Parameters (JSON Schema):
  coins (optional): List of asset tickers to fetch, e.g. ["BTC", "SOL"]. Omit to fetch all available assets.
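
The interpretive rule of thumb in the tool's description (rising OI plus rising price versus rising OI plus falling price) can be written out directly. A minimal sketch of that heuristic only; it is not logic the server exposes:

```python
def oi_read(oi_change, price_change):
    """Apply the description's rule of thumb: rising OI with rising price
    suggests a strong trend; rising OI with falling price suggests shorts
    building up. Anything else is treated as no clear signal."""
    if oi_change > 0 and price_change > 0:
        return "strong trend"
    if oi_change > 0 and price_change < 0:
        return "short build-up"
    return "no clear signal"
```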
Behavior: 3/5

Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds context about OI-price relationships but does not elaborate on behavior beyond what annotations imply. There is no contradiction.

Conciseness: 4/5

The description is two sentences, with the first stating the core purpose and the second offering interpretive context. It is concise and front-loaded, though the interpretive sentence is not strictly necessary for tool selection.

Completeness: 2/5

The tool has no output schema, yet the description does not describe the return format or structure. For a simple data-fetch tool, some indication of whether results are per asset or aggregated would improve completeness.

Parameters: 3/5

The input schema has 100% coverage for the 'coins' parameter, so the schema itself documents meaning. The description does not add new details about the parameter beyond what the schema provides.

Purpose: 4/5

The description clearly states that the tool retrieves total open interest in USD and contracts for Hyperliquid perpetuals. It provides a specific verb-resource pairing ('get open interest') and is distinct from siblings like get_oi_near_cap, though it does not explicitly differentiate.

Usage Guidelines: 2/5

No guidance is given on when to use this tool versus alternatives. The description does not mention when not to use it or point to sibling tools like get_oi_near_cap for related functionality.

get_orderbook (Get Orderbook), Grade A
Read-only

Full orderbook depth (bids + asks) for any Polymarket market token. Shows liquidity at each price level.

Parameters (JSON Schema):
  token_id (required): Polymarket token ID for the YES or NO side of a market
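
"Liquidity at each price level" lends itself to a small depth summary on the client side. Since the tool publishes no output schema, the (price, size) level representation below is an assumption:

```python
def depth_summary(bids, asks):
    """Summarize an orderbook from (price, size) levels: best bid/ask,
    spread, and total size resting on each side."""
    best_bid = max(p for p, _ in bids)
    best_ask = min(p for p, _ in asks)
    return {
        "best_bid": best_bid,
        "best_ask": best_ask,
        "spread": round(best_ask - best_bid, 4),
        "bid_liquidity": sum(s for _, s in bids),
        "ask_liquidity": sum(s for _, s in asks),
    }

book = depth_summary(
    bids=[(0.61, 100), (0.60, 250)],
    asks=[(0.63, 80), (0.64, 300)],
)
print(book["spread"])  # 0.02
```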
Behavior: 4/5

Annotations already declare readOnlyHint and openWorldHint. The description adds behavioral detail: 'Shows liquidity at each price level', which is consistent and non-contradictory.

Conciseness: 5/5

Two sentences, front-loaded with key information, no wasted words. Every sentence adds value.

Completeness: 5/5

Given low complexity (one parameter, no output schema, a simple read operation), the description fully specifies the tool's purpose and behavior. No gaps.

Parameters: 3/5

The schema covers 100% of the single parameter with a clear description. The description adds no further parameter detail beyond the schema, so the baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states it returns full orderbook depth (bids and asks) for Polymarket market tokens, specifying 'liquidity at each price level'. This distinguishes it from sibling tools like get_markets or get_odds.

Usage Guidelines: 3/5

No explicit guidance on when to use this tool versus alternatives (e.g., get_markets, get_odds). The description implies it's for orderbook data but doesn't exclude or compare.

get_pm_hl_divergences (Get PM/HL Divergences), Grade A
Read-only

Markets where Polymarket implied probability diverges from Hyperliquid perpetual funding direction — e.g. PM prices bullish outcome but HL funding shows crowded longs (bearish pressure). The hardest signal to compute manually.

Parameters (JSON Schema):
  limit (optional): Number of divergences to return (default: 15)
  min_pct (optional): Minimum divergence percentage between PM implied probability and HL pricing to flag (default: 10%)
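
How `min_pct` might gate results can be sketched as a percentage-point comparison between a Polymarket implied probability and a Hyperliquid-derived probability. This is an assumption about the formula; the server does not document how it computes divergence, and the field names are illustrative:

```python
def flag_divergences(pairs, min_pct=10.0, limit=15):
    """Flag markets where PM implied probability and an HL-derived
    probability differ by at least `min_pct` percentage points, largest
    divergence first."""
    scored = [
        {**p, "divergence_pct": abs(p["pm_prob"] - p["hl_prob"]) * 100}
        for p in pairs
    ]
    flagged = [p for p in scored if p["divergence_pct"] >= min_pct]
    return sorted(flagged, key=lambda p: p["divergence_pct"], reverse=True)[:limit]

pairs = [
    {"coin": "BTC", "pm_prob": 0.65, "hl_prob": 0.50},
    {"coin": "ETH", "pm_prob": 0.52, "hl_prob": 0.48},
]
print([p["coin"] for p in flag_divergences(pairs)])  # ['BTC']
```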
Behavior: 4/5

Annotations already indicate readOnlyHint and openWorldHint. The description adds behavioral context by explaining the type of divergence (e.g., PM bullish vs HL crowded longs) and noting the computational difficulty. No contradiction.

Conciseness: 5/5

Three sentences that efficiently state the main purpose, provide an illustrative example, and note the tool's value. No redundancy or unnecessary words.

Completeness: 4/5

The description explains the concept well but does not specify output structure (no output schema). Given the clear concept and limit parameter, it is sufficiently complete for an agent to understand the tool's role.

Parameters: 3/5

The input schema covers both parameters with good descriptions (100% coverage). The tool description adds no additional parameter meaning beyond the schema, so the baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly identifies the tool's purpose: finding markets where Polymarket implied probability diverges from Hyperliquid perpetual funding direction. It provides a concrete example and distinguishes its specific signal from sibling tools like get_funding_outliers or get_hl_funding_pm_correlation.

Usage Guidelines: 4/5

The description implies usage when seeking divergences that are hard to compute manually, but does not explicitly state when not to use it or name alternatives. The sibling list makes differentiation possible.

get_signals (Get Signals), Grade A
Read-only

Detect divergence signals between Hyperliquid perpetual funding/OI sentiment and HIP-4 on-chain prediction market odds. Returns BULLISH/BEARISH/DIVERGENCE signal with reasoning — e.g. perps long-biased while prediction market prices a decline.

Parameters (JSON Schema):
  coin (required): Ticker of the asset to analyze, e.g. "BTC", "ETH", "SOL"
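
The BULLISH/BEARISH/DIVERGENCE output the description promises can be reconstructed as a toy classifier over the two inputs it names. The thresholds (funding sign as long/short bias, 0.5 as the probability pivot) are invented for illustration; the server's actual logic is not documented:

```python
def classify_signal(funding_rate, pm_up_prob):
    """Compare perp funding bias (positive funding = longs pay, crowd is
    long) against the prediction market's probability of an up move, and
    emit one of the three labels the tool description mentions."""
    perps_bullish = funding_rate > 0
    pm_bullish = pm_up_prob > 0.5
    if perps_bullish and pm_bullish:
        return "BULLISH"
    if not perps_bullish and not pm_bullish:
        return "BEARISH"
    return "DIVERGENCE"

print(classify_signal(0.0002, 0.4))  # DIVERGENCE: perps long-biased, PM prices a decline
```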
Behavior: 4/5

Annotations declare readOnlyHint and openWorldHint. The description adds concrete output types (BULLISH/BEARISH/DIVERGENCE) and mentions reasoning, which is helpful behavioral context beyond annotations.

Conciseness: 5/5

Two sentences, no wasted words. Key information is front-loaded: purpose then output format. Highly concise.

Completeness: 3/5

Given no output schema, the description should fully describe the return format. It mentions signal type and reasoning but lacks structure (e.g., JSON fields). For a simple tool, it is adequate but not detailed.

Parameters: 3/5

Schema coverage is 100% with a clear description for the 'coin' parameter. The description adds example tickers but no new semantic detail, so the baseline score of 3 applies.

Purpose: 5/5

The description clearly states the tool detects divergence signals between two specific data sources (Hyperliquid perpetual funding/OI sentiment and HIP-4 prediction market odds) and specifies output types (BULLISH/BEARISH/DIVERGENCE). This distinguishes it from siblings like get_pm_hl_divergences by naming both sources.

Usage Guidelines: 3/5

The description explains what the tool returns but does not explicitly state when to use it versus alternative tools (e.g., get_pm_hl_divergences). The example provides context, but no direct guidance on selection.

get_top_funding_rates (Get Top Funding Rates), Grade A
Read-only

Top Hyperliquid perps ranked by absolute funding rate, with OI and annualized yield. Useful for finding the most overcrowded longs/shorts and carry opportunities.

Parameters (JSON Schema):
  limit (optional): Number of top results to return (default: 10)
  min_abs_rate (optional): Minimum absolute funding rate to include, e.g. 0.0001. Omit to include all.
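
The ranking and the annualized-yield figure can be sketched as follows, assuming an hourly funding rate (Hyperliquid funding accrues hourly) and simple, non-compounded annualization; the `funding` field name is illustrative:

```python
def annualized_yield(hourly_rate):
    """Simple annualization of an hourly funding rate: 24 payments a day,
    365 days a year, no compounding."""
    return hourly_rate * 24 * 365

def top_by_abs_funding(perps, limit=10, min_abs_rate=None):
    """Rank perps by absolute funding rate, optionally dropping those
    below `min_abs_rate`, and attach an annualized yield."""
    if min_abs_rate is not None:
        perps = [p for p in perps if abs(p["funding"]) >= min_abs_rate]
    ranked = sorted(perps, key=lambda p: abs(p["funding"]), reverse=True)
    return [{**p, "apr": annualized_yield(p["funding"])} for p in ranked[:limit]]

perps = [
    {"coin": "A", "funding": 0.0001},
    {"coin": "B", "funding": -0.0005},
    {"coin": "C", "funding": 0.00005},
]
```

A negative annualized yield (as for coin B above) means shorts are paying longs, i.e. the crowd is short.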
Behavior: 4/5

Annotations mark the tool as readOnly and openWorld, so safety is clear. The description adds that results include OI and annualized yield and are ranked by absolute funding rate, which is useful behavioral context beyond the annotations.

Conciseness: 5/5

Two sentences, no filler. Front-loaded with the key action and output, then a concise usage cue. Every word earns its place.

Completeness: 4/5

No output schema exists, but the description mentions the key fields (OI, annualized yield) and ranking, which is sufficient for a simple list-of-items tool. It could be more explicit about the structure, but it is adequate given low complexity.

Parameters: 3/5

The input schema covers both parameters with descriptions, achieving 100% coverage. The description does not add any additional meaning beyond what is already in the schema, so the baseline score of 3 is appropriate.

Purpose: 5/5

Description clearly states it returns top Hyperliquid perps ranked by absolute funding rate, with OI and annualized yield. It distinguishes itself from sibling tools like get_funding_rates by specifying 'top' and the ranking criterion.

Usage Guidelines: 4/5

States usefulness for finding overcrowded longs/shorts and carry opportunities, providing clear use context. Does not explicitly exclude alternatives, but the specificity is adequate.

get_volume_spikes (Get Volume Spikes), Grade A
Read-only

Polymarket markets with abnormal 24h volume vs their 7-day daily average. Volume spikes typically precede news events or informed positioning.

Parameters (JSON Schema):
  limit (optional): Number of results to return (default: 15)
  min_ratio (optional): Minimum ratio of 24h volume vs 7-day daily average to qualify as a spike (default: 3x)
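
The spike definition (24h volume versus the 7-day daily average) is simple arithmetic worth making concrete. Field names below are illustrative, since the tool publishes no output schema:

```python
def spike_ratio(vol_24h, vol_7d_total):
    """Ratio of 24h volume to the 7-day daily average, per the
    description's definition of a spike."""
    daily_avg = vol_7d_total / 7
    return vol_24h / daily_avg

def find_spikes(markets, min_ratio=3.0, limit=15):
    """Keep markets whose ratio meets `min_ratio`, largest ratio first."""
    scored = [
        {**m, "ratio": spike_ratio(m["vol_24h"], m["vol_7d"])}
        for m in markets
    ]
    spikes = [m for m in scored if m["ratio"] >= min_ratio]
    return sorted(spikes, key=lambda m: m["ratio"], reverse=True)[:limit]

markets = [
    {"name": "A", "vol_24h": 70_000, "vol_7d": 70_000},  # 7x the daily average
    {"name": "B", "vol_24h": 12_000, "vol_7d": 70_000},  # 1.2x, below the 3x default
]
print([m["name"] for m in find_spikes(markets)])  # ['A']
```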
Behavior: 4/5

Annotations indicate readOnlyHint=true and openWorldHint=true, which the description complements by explaining the rationale behind volume spikes (e.g., preceding news). It adds behavioral context beyond annotations, though it does not detail the return format.

Conciseness: 5/5

Two sentences with no wasted words. The first sentence states the function, the second adds significance. Front-loaded and efficient.

Completeness: 4/5

For a simple tool with two parameters, annotations, and no output schema, the description is sufficiently clear about purpose and usage context. It could mention the return structure, but that is not essential.

Parameters: 3/5

The input schema has 100% description coverage with clear explanations for 'limit' and 'min_ratio'. The description adds no additional parameter details, so the baseline score of 3 is appropriate.

Purpose: 5/5

The description uses the specific verb 'get' and resource 'volume spikes', clearly stating it returns Polymarket markets with abnormal 24h volume vs the 7-day average. This distinguishes it from sibling tools like get_movers or get_markets.

Usage Guidelines: 4/5

It provides context that volume spikes typically precede news events or informed positioning, indicating when to use the tool. However, it does not explicitly mention when not to use it or name alternative tools.

get_whale_convergence (Get Whale Convergence), Grade A
Read-only

Detect simultaneous whale activity on both Hyperliquid perps and Polymarket for an asset. Flags convergence events where large perp trades and large prediction market positions align — a leading indicator of informed positioning.

Parameters (JSON Schema):
  coin (required): Ticker of the asset to analyze, e.g. "BTC", "ETH"
  window_minutes (optional): Lookback window in minutes for whale trade detection (1–60, default: 15)
  min_notional_usdc (optional): Minimum trade size in USDC to qualify as whale activity (default: 100,000)
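
The convergence check these parameters describe can be sketched as two filtered lists that must both be non-empty. The trade representation (`t_min` minutes before now, `notional` in USDC) is illustrative, not the server's format:

```python
def detect_convergence(hl_trades, pm_trades, window_minutes=15,
                       min_notional_usdc=100_000):
    """Flag a convergence event when both Hyperliquid and Polymarket show
    at least one trade of `min_notional_usdc`+ inside the lookback window."""
    def whales(trades):
        return [
            t for t in trades
            if t["t_min"] <= window_minutes and t["notional"] >= min_notional_usdc
        ]
    hl, pm = whales(hl_trades), whales(pm_trades)
    return {"convergence": bool(hl) and bool(pm), "hl": hl, "pm": pm}
```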
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true and openWorldHint=true. The description adds minimal behavioral context beyond the purpose, such as data freshness or interpretation caveats. It doesn't disclose rate limits or auth needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences front-load the purpose, define convergence, and note it's a leading indicator. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers what the tool does but lacks output format details (e.g., list of events, timestamps). Given no output schema, this is a gap. However, it adequately describes the multi-exchange scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for all 3 parameters. The description does not add additional meaning or syntax details beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool detects simultaneous whale activity on both Hyperliquid perps and Polymarket, defining convergence events. It distinguishes from siblings like get_whale_trades by focusing on cross-exchange alignment for informed positioning.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for finding cross-market signals but lacks explicit guidance on when to choose this over siblings (e.g., get_whale_trades, get_pm_hl_divergences). No when-not-to-use or alternatives listed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_whale_positions (Get Whale Positions): grade B
Read-only

Largest current position holders in a Polymarket prediction market. Shows wallet address, position size in USDC, and side (YES/NO).

Parameters (JSON Schema)

Name | Required | Description | Default
condition_id | Yes | Polymarket condition ID for the market to inspect | –
min_size_usdc | No | Minimum position size in USDC to include in results | 1,000
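To make the min_size_usdc threshold concrete, the sketch below filters hypothetical position records the way the tool's 1,000 USDC default implies. The record fields (wallet, size_usdc, side) are assumptions modeled on the fields the description names, not the server's actual output schema.

```python
# Hypothetical position records; the real output fields are undocumented
# beyond wallet address, position size in USDC, and side (YES/NO).
positions = [
    {"wallet": "0xabc", "size_usdc": 25_000.0, "side": "YES"},
    {"wallet": "0xdef", "size_usdc": 800.0, "side": "NO"},
    {"wallet": "0x123", "size_usdc": 5_500.0, "side": "YES"},
]

def filter_whales(records, min_size_usdc=1_000.0):
    """Keep positions at or above the threshold, largest first."""
    whales = [p for p in records if p["size_usdc"] >= min_size_usdc]
    return sorted(whales, key=lambda p: p["size_usdc"], reverse=True)

print(filter_whales(positions))  # drops the 800 USDC position
```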
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the description's mention of showing wallet addresses and sizes is consistent. However, it adds only minimal behavioral context beyond annotations (e.g., no mention of data freshness, pagination, or rate limits).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words, with the purpose stated first. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description mentions the main output fields (wallet, size, side) but omits details like number of results, ordering, or potential limits. Adequate but incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers both parameters with descriptions, achieving 100% coverage. The tool description does not add extra parameter-level detail beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the largest position holders in a Polymarket market, specifying the output fields. It distinguishes from siblings like 'get_whale_trades' by focusing on positions, but the verb 'get' is generic.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, no prerequisites, and no exclusions. The description only explains what it does, not when to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_whale_trades (Get Whale Trades): grade A
Read-only

Recent large trades on Hyperliquid perps above a notional threshold. Includes side (long/short), size, price, and timestamp.

Parameters (JSON Schema)

Name | Required | Description | Default
coin | Yes | Asset ticker to fetch whale trades for, e.g. "BTC", "ETH" | –
min_notional_usdc | No | Minimum trade size in USDC to qualify as a whale trade | 50,000
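The notional threshold is just size times price. A minimal sketch of the qualifying check, assuming the 50,000 USDC default (the helper name is hypothetical):

```python
def is_whale_trade(size, price, min_notional_usdc=50_000.0):
    """A trade qualifies when size * price meets the notional threshold."""
    return size * price >= min_notional_usdc

# 0.4 BTC at 100,000 USDC is 40,000 notional: below the 50k default.
print(is_whale_trade(0.4, 100_000.0))          # False
print(is_whale_trade(0.4, 100_000.0, 30_000))  # True with a lower threshold
```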
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true. Description adds that it returns recent trades with side/size/price/timestamp, consistent with read-only behavior. No additional behavioral insight like rate limits or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and output details. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately describes purpose, input, and output for a simple retrieval tool. Data freshness and pagination details are missing, but that is acceptable given its simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers both parameters (coin, min_notional_usdc) with descriptions. The description mentions 'above a notional threshold' but adds no new semantics beyond schema defaults and examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves recent large trades on Hyperliquid perps with a notional threshold, listing output fields (side, size, price, timestamp). Distinguishes from siblings like get_whale_positions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for fetching recent large trades for a coin, but gives no explicit guidance on when to prefer it over alternatives (e.g., get_whale_convergence) or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_markets (Search Markets): grade A
Read-only

Full-text search across all Polymarket and HIP-4 prediction markets. Returns ranked results with current odds.

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Maximum number of results to return (1–50) | 10
query | Yes | Keywords to search in market names and descriptions, e.g. "bitcoin ETF", "US election", "Fed pivot" | –
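Since limit is constrained to 1–50, a cautious client can clamp it before building the arguments object. A minimal sketch (the helper name is hypothetical, not part of the server's API):

```python
def build_search_arguments(query, limit=10):
    """Clamp limit to the documented 1-50 range before sending."""
    limit = max(1, min(50, int(limit)))
    return {"query": query, "limit": limit}

print(build_search_arguments("bitcoin ETF", limit=200))  # limit clamped to 50
```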
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only and open-world hints. The description adds that results are ranked with current odds, providing behavioral context beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundancy. First sentence captures core purpose, second adds result characteristics.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 2 fully described parameters and no output schema, the description adequately explains the search scope and result type. Could mention that it searches names and descriptions, but schema covers that.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already describes both parameters with 100% coverage. The description adds example queries, enhancing meaning slightly but not substantially.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it performs full-text search across Polymarket and HIP-4 prediction markets and returns ranked results with current odds, distinguishing it from sibling tools like 'get_markets'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies use for full-text search, but does not mention exclusions or alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
