tradingcalc
Server Details
Deterministic crypto futures calculations for AI agents.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: SKalinin909/tradingcalc-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
19 tools
primitive.average_entry (Grade: B)
Calculate the weighted average entry price from multiple buy/sell fills (DCA). Use when user has filled at multiple prices and asks "what's my average entry?" Returns: averagePrice, totalSize, totalCost.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | | |
| symbol | Yes | Trading pair symbol, e.g. BTCUSDT | |
| exchangeCode | No | Exchange identifier (optional) | |
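The calculation behind this tool is one line of arithmetic. A minimal Python sketch, assuming fills arrive as (price, quantity) pairs and quantity acts as the weight; how the live tool nets buys against sells is undocumented, so this version treats every fill as additive:

```python
def average_entry(fills):
    """Quantity-weighted average entry price from (price, quantity) fills.

    Assumption: every fill adds to the position; the server's handling
    of mixed buy/sell fills and zero-quantity fills may differ.
    """
    total_size = sum(qty for _, qty in fills)
    if total_size == 0:
        raise ValueError("total fill quantity is zero")
    total_cost = sum(price * qty for price, qty in fills)
    return {
        "averagePrice": total_cost / total_size,
        "totalSize": total_size,
        "totalCost": total_cost,
    }
```

The dict keys mirror the documented return fields (averagePrice, totalSize, totalCost); the function name and signature are hypothetical.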
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It specifies the calculation method ('weighted'), implying quantity acts as the weight, but omits critical behavioral details: whether buys and sells negate each other, handling of zero-quantity fills, side effects (likely none for a 'primitive'), or return value format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences. (This rubric note applies to every assessment below; it is stated once here.)
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, densely informative sentence of 9 words. It leads with the verb ('Calculate'), wastes no words, and immediately conveys the tool's function without redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description should disclose what the tool returns (the calculated price value). While it adequately describes the input domain (trading fills), it omits output format, precision handling, and validation rules (e.g., negative quantities for sells), leaving gaps for a financial calculation primitive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
At 67% schema coverage, the description adds valuable semantic context beyond the schema. The phrase 'weighted average' explains the relationship between 'price' and 'quantity' parameters (weights), and 'fills' contextualizes the domain (trade execution data), which the raw schema entries ('Fill price', 'Fill quantity') do not convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Calculate') and resource ('weighted average entry price') with clear domain context ('buy/sell fills'). However, it does not explicitly distinguish from sibling 'primitive.hedge_ratio' or explain why to use this specific primitive versus others.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the phrase 'from multiple buy/sell fills' implies usage context (aggregate position calculations), there is no explicit guidance on when to use this versus alternatives, preconditions for the fills data, or when this calculation is appropriate versus other workflow tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
primitive.hedge_ratio (Grade: B)
Calculate the short perpetual futures position size needed to hedge a spot holding. Use when user asks "how much should I short to hedge my BTC?" or "what margin do I need for a 100% hedge?". Returns: hedgeNotional, requiredMargin, estimatedFundingCost.
| Name | Required | Description | Default |
|---|---|---|---|
| leverage | No | Leverage on the perp short. Default 1. | |
| spotSize | Yes | Spot position value in USDT | |
| hedgeRatio | No | Percentage of spot to hedge, e.g. 100 for full hedge, 50 for half. Default 100. | |
| fundingRatePct | No | Current 8h funding rate as percentage, e.g. 0.01. Used for cost estimate. | |
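The three return values suggest straightforward arithmetic. A sketch under stated assumptions: hedgeNotional scales the spot value by the hedge ratio, margin divides by leverage, and the funding estimate covers a single interval (the actual horizon and sign convention are not documented):

```python
def hedge_ratio(spot_size, hedge_ratio_pct=100.0, leverage=1.0,
                funding_rate_pct=0.0):
    """Size a short-perp hedge against a spot holding (sketch)."""
    hedge_notional = spot_size * hedge_ratio_pct / 100.0  # USDT to short
    required_margin = hedge_notional / leverage           # collateral needed
    # One funding interval's flow on the short leg; a positive rate is
    # assumed to mean the short receives funding.
    estimated_funding = hedge_notional * funding_rate_pct / 100.0
    return {
        "hedgeNotional": hedge_notional,
        "requiredMargin": required_margin,
        "estimatedFundingCost": estimated_funding,
    }
```

For example, a 50% hedge on a 10,000 USDT spot position at 2x leverage requires 2,500 USDT of margin for a 5,000 USDT short.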
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Compensates by disclosing return values (notional, margin, funding cost) which substitutes for missing output schema. However, fails to clarify if this performs account queries (reads) or is pure math, and omits safety profile (read-only vs trading operation).
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states operation, second states returns. Zero redundancy, appropriate length, information-dense with no wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong compensation for missing output schema by enumerating return values. Complete parameter documentation via schema. Lacks operational details (error conditions, rate limits, account requirements) which annotations would ideally cover.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for all 4 parameters (spotSize, leverage, hedgeRatio, fundingRatePct). Description provides conceptual context (spot holding, hedging) but adds no semantic detail beyond schema baseline.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (calculate) and specific resource (short perpetual futures position size for hedging). Explains the financial operation distinctly. Lacks explicit differentiation from sibling primitives like average_entry or workflow tools, though the naming convention helps.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains functional description implying use case (hedging spot holdings), but provides no explicit guidance on when to select this over alternatives like workflow.run_position_sizing or primitive.average_entry, and no prerequisites or constraints mentioned.
system.verify (Grade: A)
Run the full regression suite — 22 canonical test vectors across all 12 calculators — and return a pass/fail report with counts and timestamp. Call this before using results in production workflows to confirm the computation layer is operating correctly.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds useful scope ('all 12 calculators') and return format ('pass/fail report'), but omits safety profile (destructive vs safe), execution duration, or side effects that annotations would typically cover.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence front-loads action and output; second sentence provides usage context. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete for a zero-parameter tool. Compensates for lack of output schema by describing return value ('pass/fail report'). Mentions scope (12 calculators) providing necessary context. Could benefit from noting execution cost or duration, but sufficient for complexity level.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters. Per scoring rules, 0 params = baseline 4. Description appropriately requires no parameter explanation.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Run' with clear resource 'test-vector suite against all 12 calculators' and specifies output 'pass/fail report'. Distinguishes from calculation siblings by positioning as verification/validation tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use this to confirm that calculator results are correct before trusting them.' Provides clear context for verification workflow. Lacks explicit 'when not to use' or named alternatives, preventing a 5.
workflow.run_breakeven_planning (Grade: C)
Calculate the break-even exit price that covers all trading fees. Use when user asks "what price do I need to just break even?" Returns: breakevenPrice, totalFees.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | | |
| sizeBase | Yes | Position size in base asset | |
| entryPrice | Yes | Entry price (positive) | |
| feeOpenPct | No | Opening fee fraction, default 0.0002 | |
| feeClosePct | No | Closing fee fraction, default 0.0005 | |
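The fee-inclusive breakeven is closed-form: the exit must recover the open fee (charged on entry notional) plus the close fee (charged on exit notional). A sketch under those assumptions; the function name and the long/short handling are hypothetical, since the schema does not document the `side` values:

```python
def breakeven(side, size_base, entry_price,
              fee_open_pct=0.0002, fee_close_pct=0.0005):
    """Exit price at which PnL exactly covers open + close fees (sketch)."""
    if side == "long":
        # Solve (exit - entry)*size = entry*size*feeOpen + exit*size*feeClose
        be = entry_price * (1 + fee_open_pct) / (1 - fee_close_pct)
    else:  # short
        be = entry_price * (1 - fee_open_pct) / (1 + fee_close_pct)
    total_fees = size_base * (entry_price * fee_open_pct + be * fee_close_pct)
    return {"breakevenPrice": be, "totalFees": total_fees}
```

With the default fee fractions a long must exit roughly 0.07% above entry just to break even, which matches intuition: 0.02% open fee plus 0.05% close fee.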
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Calculate' implies a read-only operation, the description fails to specify the return format (scalar price vs. object), whether the calculation includes slippage, or any precision/constraints on the output.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient at 9 words. Front-loaded with the action verb and specific resource. No redundant or wasted language; every word contributes to understanding the tool's singular purpose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite high schema coverage, the absence of annotations and output schema creates significant gaps. For a calculation tool, the description should ideally specify the return value format and units, especially since no output_schema exists to document this.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 80%, establishing a baseline of 3. The description mentions 'trading fees' which provides semantic context for the feeOpenPct and feeClosePct parameters, and implies the need for entry pricing, but does not explicitly clarify the 'side' or 'sizeBase' parameters or their relationship to the calculation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Calculate') and clearly identifies the output ('break-even exit price') and scope ('covers all trading fees'). However, it lacks explicit differentiation from siblings like 'workflow.run_exit_target' or 'workflow.run_pnl_planning' that also deal with exit calculations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to select this tool versus alternatives (e.g., when to use breakeven calculation vs. exit target planning). No mention of prerequisites or conditions where this tool would be unsuitable.
workflow.run_carry_trade (Grade: B)
Delta-neutral carry trade (funding arbitrage) analysis. Use when user asks "is this carry trade worth it?" — long on exchange A, short on exchange B, collect the funding rate spread. Returns: netYieldPct, grossProfit, netProfit, breakevenDays, verdict (profitable/marginal/loss).
| Name | Required | Description | Default |
|---|---|---|---|
| notional | Yes | Position notional in USDT | |
| hold_days | Yes | Hold duration in days | |
| interval_hours | No | Funding interval: 1 or 8 hours (default 8) | |
| transfer_fee_pct | No | One-way transfer fee % (default 0.1) | |
| funding_rate_long | Yes | Funding rate on long exchange per interval (decimal) | |
| funding_rate_short | Yes | Funding rate on short exchange per interval (decimal) | |
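Note that this tool takes decimal funding rates per interval, while run_funding_arbitrage (below) takes percentages. A sketch of the likely math, assuming the long leg pays its rate, the short leg receives its rate, and capital makes two one-way transfers (in and out); all of these conventions are assumptions:

```python
def carry_trade(notional, funding_rate_long, funding_rate_short,
                hold_days, interval_hours=8, transfer_fee_pct=0.1):
    """Delta-neutral carry across two exchanges (sketch)."""
    intervals_per_day = 24 / interval_hours
    spread_per_interval = funding_rate_short - funding_rate_long  # decimals
    gross_profit = notional * spread_per_interval * intervals_per_day * hold_days
    transfer_cost = 2 * notional * transfer_fee_pct / 100.0  # in + out
    net_profit = gross_profit - transfer_cost
    daily_income = notional * spread_per_interval * intervals_per_day
    breakeven_days = (transfer_cost / daily_income
                      if daily_income > 0 else float("inf"))
    verdict = ("profitable" if net_profit > 0
               else "marginal" if net_profit == 0 else "loss")
    return {"netYieldPct": net_profit / notional * 100.0,
            "grossProfit": gross_profit, "netProfit": net_profit,
            "breakevenDays": breakeven_days, "verdict": verdict}
```

For example, a 0.01% per-8h spread on 10,000 USDT earns about 3 USDT/day, so a 20 USDT round-trip transfer cost breaks even in roughly a week.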
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clarifies that the tool performs 'analysis' and returns specific financial metrics plus a verdict, but it does not explicitly state whether this is a simulation-only tool or if it executes actual trades, nor does it mention error handling or data sources.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficiently structured sentences with zero wasted words, front-loading the strategy type and following with specific metrics and return values.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% input schema coverage and absence of an output schema, the description adequately compensates by enumerating the calculated outputs (net yield, ROI, verdict, etc.). It appropriately covers the tool's purpose for a calculation utility, though it could clarify the return data structure.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'transfer cost' and 'breakeven days' which loosely map to parameters, but does not add syntax details, validation constraints, or semantic context beyond what the schema already provides for each parameter.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as performing 'Delta-neutral carry trade analysis' with specific outputs (net yield, gross/net profit, transfer cost, ROI, breakeven days). However, it does not explicitly differentiate this tool from the sibling 'workflow.run_funding_arbitrage', which likely involves similar funding rate calculations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to select this tool versus alternatives like 'run_funding_arbitrage' or 'run_funding_cost', nor does it mention prerequisites or conditions for use.
workflow.run_compound_funding (Grade: A)
Project capital growth from reinvesting perpetual futures funding income (compounding carry). Use when user asks "how much will I make compounding 0.01% funding for 90 days?" or "what's my APY on this carry position?". Returns: finalCapital, totalEarned, apy, growthTable.
| Name | Required | Description | Default |
|---|---|---|---|
| reinvestPct | No | Percentage of earnings reinvested each interval. 100 = full compounding, 0 = no reinvestment. Default 100. | |
| durationDays | Yes | Number of days to project | |
| intervalHours | No | Funding interval: 8 (standard) or 1 (Hyperliquid) | |
| fundingRatePct | Yes | Funding rate per interval as percentage, e.g. 0.01 for 0.01% | |
| initialCapital | Yes | Starting capital in USDT | |
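The projection is a per-interval compounding loop. A sketch, assuming full compounding multiplies capital by (1 + rate) each interval, the growth table snapshots once per day, and APY annualizes over 365 days; the server's snapshot granularity and annualization basis are assumptions:

```python
def compound_funding(initial_capital, funding_rate_pct, duration_days,
                     interval_hours=8, reinvest_pct=100.0):
    """Project funding income with per-interval reinvestment (sketch)."""
    intervals = int(duration_days * 24 / interval_hours)
    per_day = int(24 / interval_hours)
    rate = funding_rate_pct / 100.0
    capital, total_earned = initial_capital, 0.0
    growth_table = []
    for i in range(1, intervals + 1):
        income = capital * rate          # funding earned this interval
        total_earned += income
        capital += income * reinvest_pct / 100.0  # reinvested fraction
        if i % per_day == 0:
            growth_table.append({"day": i // per_day, "capital": capital})
    apy = ((capital / initial_capital) ** (365.0 / duration_days) - 1) * 100.0
    return {"finalCapital": capital, "totalEarned": total_earned,
            "apy": apy, "growthTable": growth_table}
```

With reinvestPct=0 this degenerates to simple (non-compounding) income: finalCapital stays at the starting value while totalEarned accumulates linearly.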
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong disclosure given no annotations: 'Project' clearly signals this is a simulation/calculation tool (read-only, no execution). Also documents return values (final capital, APY, snapshot table) which compensates for missing output schema. Missing: error conditions, rate limiting, or caveats about calculation assumptions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First states purpose, second documents returns. Every element earns its place. Appropriate length for the complexity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a projection tool: describes intended function, documents outputs (compensating for no output schema), and leverages good schema coverage. Minor gap: lacks explicit confirmation of simulation-only behavior (though 'Project' implies this) and missing sibling contrast for the crowded workflow domain.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds domain context (perpetual futures, reinvestment strategy) that semantically frames the parameter set as a cohesive calculation, but does not elaborate on individual parameter syntax or relationships beyond schema definitions.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent: 'Project capital growth from reinvesting perpetual futures funding income' provides a specific verb (project), clear resource (perpetual futures funding income), and distinguishes from siblings like run_funding_cost or run_funding_arbitrage through the 'reinvesting' and 'compound' aspect implied by the name and description.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage through specificity (mentions 'reinvesting' which signals compounding use cases), but lacks explicit when-to-use guidance or differentiation statements like 'Use this instead of run_funding_cost when modeling reinvestment effects.' No alternatives or exclusions named.
workflow.run_dca_entry (Grade: C)
DCA entry planner: weighted average entry price, breakeven, and per-level contribution from multiple fill prices and sizes. Use when user bought at several prices and asks "what's my average entry?" or "where is my DCA breakeven?". Returns: averageEntry, breakeven, per-level summary.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | | |
| entries | Yes | | |
| fee_open_pct | No | Open fee rate (default 0.0002) | |
| fee_close_pct | No | Close fee rate (default 0.0005) | |
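Functionally this reads as primitive.average_entry plus the fee-adjusted breakeven algebra applied to the averaged entry. A sketch, assuming entries are {price, size} objects and the long/short breakeven mirrors run_breakeven_planning; both assumptions, since neither `side` nor `entries` is documented in the schema:

```python
def dca_entry(side, entries, fee_open_pct=0.0002, fee_close_pct=0.0005):
    """Average entry, breakeven, and per-level contribution (sketch)."""
    total_size = sum(e["size"] for e in entries)
    avg_entry = sum(e["price"] * e["size"] for e in entries) / total_size
    if side == "long":
        be = avg_entry * (1 + fee_open_pct) / (1 - fee_close_pct)
    else:  # short
        be = avg_entry * (1 - fee_open_pct) / (1 + fee_close_pct)
    levels = [{"price": e["price"], "size": e["size"],
               "weightPct": 100.0 * e["size"] / total_size} for e in entries]
    return {"averageEntry": avg_entry, "breakeven": be, "levels": levels}
```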
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears full responsibility for disclosing behavioral traits. It mentions what gets calculated (weighted average, breakeven) but fails to disclose whether the tool is read-only (likely), what format the response takes, or any computational limitations. It does not mention that fees are factored into the breakeven calculation despite fee parameters existing in the schema.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense sentence with no filler text. The colon structure effectively separates the tool type from its outputs. While efficient, it borders on cryptic—it could benefit from slightly more elaboration to clarify the relationship between inputs (entries array) and outputs for better scannability.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters including a nested array structure (entries with price/size objects), no annotations, and no output schema, the description is insufficient. It omits constraints like the 2-20 entry level limit, does not describe the return value format (critical for a calculation tool), and fails to contextualize the fee parameters' role in the calculation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (fee_open_pct and fee_close_pct are documented in schema, side and entries are not). The description adds semantic value by referencing 'multiple fill prices and sizes', which clarifies the purpose of the entries array parameter. However, it does not explain the 'side' parameter (long/short) or acknowledge the fee parameters, leaving gaps that the schema only partially covers.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as a DCA (Dollar Cost Averaging) calculator that computes weighted average entry price, breakeven levels, and per-level contributions. It uses specific financial terminology that accurately describes the domain. However, it fails to differentiate from the similar sibling tool 'primitive.average_entry' or explain why to use this workflow version over the primitive.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'primitive.average_entry' or 'workflow.run_breakeven_planning'. It does not mention prerequisites, such as needing at least 2 entry levels (per schema constraints), nor does it clarify the trading context (long vs short) implications.
workflow.run_exit_target (Grade: C)
Calculate the exact exit price needed to hit a target PnL or ROE percentage. Use when user asks "at what price do I take profit to make $500?" or "where should I set TP for 20% ROE?". Returns: targetExitPrice.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | | |
| leverage | Yes | Leverage multiplier | |
| sizeBase | Yes | Position size in base asset | |
| entryPrice | Yes | Entry price | |
| feeOpenPct | No | Opening fee fraction, default 0.0002 | |
| targetMode | Yes | "pnl" = target in USDT, "roe" = target in % | |
| feeClosePct | No | Closing fee fraction, default 0.0005 | |
| targetValue | Yes | Target value (USDT or %) | |
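For a long, the fee-adjusted PnL at exit E is (E - entry)*size - entry*size*feeOpen - E*size*feeClose; setting that equal to the target and solving for E gives the formula below. ROE targets are converted to USDT via margin = entry*size / leverage. Both the fee treatment and the ROE convention are assumptions in this sketch:

```python
def exit_target(side, size_base, entry_price, leverage,
                target_mode, target_value,
                fee_open_pct=0.0002, fee_close_pct=0.0005):
    """Exit price hitting a target PnL (USDT) or ROE (%) after fees (sketch)."""
    if target_mode == "roe":
        margin = entry_price * size_base / leverage
        target_pnl = margin * target_value / 100.0
    else:  # "pnl"
        target_pnl = target_value
    open_fee = entry_price * size_base * fee_open_pct
    if side == "long":
        # (E - entry)*size - open_fee - E*size*feeClose = target_pnl
        exit_price = ((target_pnl + open_fee + entry_price * size_base)
                      / (size_base * (1 - fee_close_pct)))
    else:  # short
        exit_price = ((entry_price * size_base - open_fee - target_pnl)
                      / (size_base * (1 + fee_close_pct)))
    return {"targetExitPrice": exit_price}
```

Sanity checks with fees zeroed out: a 10 USDT PnL target on 1 unit entered at 100 requires exit at 110 (long) or 90 (short); a 20% ROE target at 10x leverage requires only a 2% price move.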
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. While 'Calculate' suggests a read-only operation, the description fails to confirm this, describe the output format (critical since no output schema exists), or mention error conditions (e.g., impossible targets given leverage constraints).
Is the description appropriately sized, front-loaded, and free of redundancy?
Single 12-word sentence delivers maximal information density. The sentence is front-loaded with the action verb and every word contributes to the functional definition without redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex financial calculation tool with 8 parameters (including fee fractions) and no output schema, the description omits critical context: it does not mention that fees are factored into the calculation, does not describe the return value (presumably a price level), and provides no guidance on valid input ranges or constraint violations.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 88% (high), establishing a baseline of 3. The description mentions 'PnL or ROE' which contextualizes targetMode/targetValue, but adds no semantic detail for feeOpenPct/feeClosePct despite their importance to the calculation, nor clarifies the 'sizeBase' unit expectations beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific calculation performed (exit price for target PnL/ROE) using a precise verb. It implicitly distinguishes from siblings like run_breakeven_planning by focusing on profit targets rather than breakeven, though it lacks explicit contrast with workflow.run_pnl_planning.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to select this tool versus alternatives like run_pnl_planning or run_scenario_planning. It states what the tool does but offers no decision criteria for tool selection.
workflow.run_funding_arbitrage (Grade: A)
Calculate funding rate arbitrage profit: annualized yield, net profit, and breakeven days for a long/short basis trade across two exchanges. Use when user asks "is this funding arb worth it?" or "how many days to break even on transfer fees?". Returns: netProfitUsdt, annualizedYieldPct, breakevenDays.
| Name | Required | Description | Default |
|---|---|---|---|
| durationDays | Yes | Holding period in days | |
| positionSize | Yes | Position size in USDT | |
| intervalHours | No | Funding interval: 8 (standard) or 1 (Hyperliquid) | |
| transferFeePct | No | One-time transfer/setup fee as percentage, e.g. 0.1 for 0.1% | |
| longFundingRate | Yes | Funding rate on long side (% per interval, positive = you pay) | |
| shortFundingRate | Yes | Funding rate on short side (% per interval, positive = you receive) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable output context (annualized yield, net profit, breakeven days), but missing critical safety disclosure: does not clarify whether this performs calculation-only or actually executes/managed trades—a crucial distinction for financial workflows that the agent needs to know.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tightly constructed with zero waste: a front-loaded verb ('Calculate'), immediately followed by outputs and domain context. Every clause earns its place, with no redundancy against the schema and no tautology.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong coverage given constraints: it compensates for the missing output schema by explicitly listing return values (annualized yield, net profit, breakeven days). One element is missing: explicit confirmation that this is a calculator (non-destructive) rather than a trade executor, which would complete the safety context given the 6-parameter complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds valuable semantic context linking parameters: 'long/short basis trade across two exchanges' explains the relationship between longFundingRate and shortFundingRate (different exchanges) and implies intervalHours governs both legs. Elevates above pure schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent: specific verb 'Calculate' + resource 'funding rate arbitrage profit' + scope 'long/short basis trade across two exchanges'. Clearly distinguishes from siblings like 'run_funding_cost' (single-side cost) and 'run_breakeven_planning' (breakeven analysis) by specifying the arbitrage domain and dual-exchange context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage through specificity ('funding rate arbitrage' signals arbitrage opportunity analysis), but lacks explicit when-to-use guidance vs. alternatives like 'run_funding_cost' or 'run_compound_funding'. No explicit exclusions or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
workflow.run_funding_breakeven (Grade: A)
Price move needed to cover funding cost + fees over a holding period. Use when user asks "how much does BTC need to move for me to profit after funding?" or "is funding killing my edge on this trade?". Returns: breakevenWithFunding, breakevenWithoutFunding, requiredMovePct.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | ||
| size | Yes | Position size in base currency | |
| hold_hours | Yes | Hold duration in hours | |
| entry_price | Yes | Entry price | |
| fee_open_pct | No | Open fee rate (default 0.0002) | |
| funding_rate | Yes | Funding rate per 8h period (decimal, e.g. 0.0001) | |
| fee_close_pct | No | Close fee rate (default 0.0005) |
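The breakeven calculation the description implies can be sketched as below. This is an assumed formulation, not the server's verified logic: it treats funding as a cost to the position regardless of sign and scales the per-8h rate linearly with hold time, both simplifications.

```python
def funding_breakeven(side, entry_price, hold_hours, funding_rate,
                      fee_open_pct=0.0002, fee_close_pct=0.0005):
    """Sketch: price move needed to cover fees plus funding (assumed model)."""
    funding_frac = funding_rate * (hold_hours / 8)  # per-8h rate, no compounding
    fee_frac = fee_open_pct + fee_close_pct
    direction = 1 if side == "long" else -1  # shorts profit from a fall
    return {
        "breakevenWithoutFunding": entry_price * (1 + direction * fee_frac),
        "breakevenWithFunding":
            entry_price * (1 + direction * (fee_frac + funding_frac)),
        "requiredMovePct": (fee_frac + funding_frac) * 100,
    }
```

With default fees, a long held 24 hours at a 0.0001 per-8h funding rate needs the price to move 0.1% in its favor before it is net positive.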
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and effectively describes the return structure: 'Returns breakeven with and without funding, and required move as % of entry.' This clarifies the tool produces multiple calculated values (breakeven prices and percentage move), which is crucial given the lack of output schema. It does not disclose calculation methodology (e.g., funding compounding assumptions).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two tightly constructed sentences with zero waste. The first sentence establishes the core calculation purpose; the second specifies the return values. Every phrase earns its place with no redundancy or generic filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given high input schema coverage (86%) and the description's disclosure of return values (compensating for no output schema), the description is adequate. However, for a financial calculation tool with no annotations, it lacks methodological context (e.g., fee calculation logic, funding intervals) and usage prerequisites that would make it fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 86% schema description coverage, the baseline is 3. The description adds semantic value by grouping parameters into business concepts ('holding period,' 'funding cost,' 'fees') but does not elaborate on specific formats, constraints, or relationships beyond what the schema already provides (e.g., it does not explain the per-8h funding rate logic mentioned in the schema).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool calculates the 'Price move needed to cover funding cost + fees over a holding period,' specifying the exact financial metric (breakeven price move) and cost components (funding + fees). It implicitly distinguishes from siblings like run_funding_cost (which calculates cost only) and run_breakeven_planning (general vs funding-specific).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like run_funding_cost, run_breakeven_planning, or run_pre_trade_check. While the specificity of 'funding cost + fees' implies usage context, there are no 'when to use' or 'when not to use' statements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
workflow.run_funding_cost (Grade: B)
Calculate the total funding cost (or income) for holding a perpetual futures position. Use when user asks "how much funding will I pay holding X days?" or "is funding eating my profit?". Returns: totalFundingUsdt (negative = you pay, positive = you receive), perIntervalUsdt.
| Name | Required | Description | Default |
|---|---|---|---|
| days | Yes | Number of days to hold | |
| side | Yes | ||
| sizeBase | Yes | Position size in base asset | |
| entryPrice | Yes | Entry price | |
| fundingRate | Yes | Funding rate per 8h period as fraction, e.g. 0.0001 |
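The funding-cost arithmetic the description and sign convention suggest is sketched below. The function name, the three-intervals-per-day assumption, and the simple (non-compounding) accrual are all assumptions, not confirmed server behavior.

```python
def funding_cost(side, size_base, entry_price, funding_rate, days):
    """Sketch: total funding paid or received over a hold (assumed model)."""
    notional = size_base * entry_price
    # With a positive rate, longs pay (negative) and shorts receive (positive),
    # matching the description's sign convention.
    sign = -1 if side == "long" else 1
    per_interval = sign * notional * funding_rate
    intervals = days * 3  # three 8-hour funding intervals per day
    return {"perIntervalUsdt": per_interval,
            "totalFundingUsdt": per_interval * intervals}
```

Holding 1 BTC long at an entry of 50,000 with a 0.0001 rate costs about 5 USDT per interval, or roughly 150 USDT over 10 days under this sketch.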
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Acknowledges bidirectional nature of funding ('cost or income') which is critical behavioral context for perpetual futures. However, lacks disclosure on output format, currency units (quote vs base), or calculation methodology (simple vs compound) given no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Compact and front-loaded with an action verb. No redundancy or filler. Appropriate density for the complexity level.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description should specify the return value format/units (e.g., 'returns total cost in quote currency'). Currently implies output through 'Calculate' but leaves ambiguity on currency denomination and granularity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 80% (4/5 params documented; 'side' lacks description). Description adds domain context ('perpetual futures position') but does not clarify parameter relationships (e.g., how 'days' interacts with 'fundingRate' which is per 8h period) or compensate for the missing 'side' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Calculate') and resource ('total funding cost or income' for perpetual futures). Scopes to perpetual futures specifically, which inherently distinguishes from siblings like run_breakeven_planning or run_funding_arbitrage. However, does not explicitly clarify when to use this versus run_compound_funding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to select this tool versus the many sibling workflow calculators (e.g., run_compound_funding, run_funding_arbitrage). No mention of prerequisites or assumptions (e.g., whether fundingRate should be annualized or per-period).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
workflow.run_liquidation_safety (Grade: A)
Calculate the liquidation price for an isolated-margin futures position. Use when user asks "where will I get liquidated?" or "how close is my liq price?". Returns: liquidationPrice, distancePct (how far from entry).
| Name | Required | Description | Default |
|---|---|---|---|
| mmr | No | Maintenance margin rate, default 0.005 (0.5%) | |
| side | Yes | ||
| leverage | Yes | Leverage multiplier, e.g. 10 for 10x | |
| entryPrice | Yes | Entry price (positive) |
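A common isolated-margin approximation matching these parameters is sketched below. It is a hypothetical reconstruction: real exchanges adjust for fees, funding, and tiered maintenance margin, none of which the description confirms.

```python
def liquidation_price(side, entry_price, leverage, mmr=0.005):
    """Sketch of an isolated-margin liquidation estimate (assumed formula)."""
    # Initial margin fraction is 1/leverage; the position is liquidated when
    # the adverse move erodes margin down to the maintenance level (mmr).
    move = 1 / leverage - mmr
    if side == "long":
        liq = entry_price * (1 - move)
    else:
        liq = entry_price * (1 + move)
    return {"liquidationPrice": liq, "distancePct": move * 100}
```

At 10x leverage with the default 0.5% maintenance rate, liquidation sits about 9.5% away from entry under this approximation.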
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. 'Calculate' implies read-only computation, but doesn't explicitly clarify this returns a planning value rather than triggering actual liquidation. No disclosure of calculation methodology, precision, or what happens with extreme leverage inputs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action verb. No redundant words. Every element serves purpose: 'Calculate' (action), 'liquidation price' (output), 'isolated-margin futures position' (scope). Zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, yet description doesn't specify return format (price number, currency, precision). With 75% schema coverage and clear domain terminology, adequately complete for a calculation tool, but gap remains on output specification and whether result includes additional safety metrics implied by 'safety' in name.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 75% (3 of 4 params described), just below the 80% threshold baseline. Description adds no parameter details beyond schema, but 'isolated-margin' in description hints at why 'side' matters (long vs short). Maintains baseline 3 as schema does heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Calculate' with clear resource 'liquidation price' and scope 'isolated-margin futures position'. Distinguishes from siblings like run_breakeven_planning or run_exit_target by focusing specifically on liquidation price calculation rather than general planning.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus other calculation/planning siblings (run_breakeven_planning, run_scenario_planning, etc.). No mention of prerequisites like needing established position parameters or when this calculation is needed (e.g., pre-trade risk assessment).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
workflow.run_max_leverage (Grade: A)
Calculate the maximum safe leverage based on account size, max acceptable drawdown, and asset daily volatility. Use when user asks "what's the max leverage I should use on BTC?" or "how much leverage is safe given 3% daily volatility?". Returns: maxLeverage, marginAtRisk.
| Name | Required | Description | Default |
|---|---|---|---|
| mmr | No | Maintenance margin rate, default 0.005 (0.5%) | |
| accountSize | Yes | Total account size in USDT | |
| volatilityPct | Yes | Expected daily price volatility as percentage, e.g. 3 for 3% | |
| maxDrawdownPct | Yes | Maximum acceptable drawdown as percentage, e.g. 10 for 10% |
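One plausible reading of these inputs is capping leverage so a single one-day volatility move cannot exceed the acceptable drawdown. The sketch below encodes that assumption; the actual methodology (and how, if at all, mmr enters) is not disclosed.

```python
def max_leverage(account_size, max_drawdown_pct, volatility_pct, mmr=0.005):
    """Sketch: leverage cap from drawdown tolerance vs. daily volatility.

    Assumption: a one-day adverse move of volatility_pct at max leverage
    should lose no more than max_drawdown_pct of the account. The optional
    mmr parameter is ignored in this simplification.
    """
    max_lev = max_drawdown_pct / volatility_pct
    margin_at_risk = account_size * max_drawdown_pct / 100
    return {"maxLeverage": max_lev, "marginAtRisk": margin_at_risk}
```

For a 10,000 USDT account tolerating a 10% drawdown on an asset with 3% daily volatility, this yields roughly 3.3x with 1,000 USDT at risk.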
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full disclosure burden. It compensates partially by documenting return values ('recommended leverage and margin at risk') since no output schema exists. However, it omits idempotency, side effects, or execution constraints expected for a calculation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes inputs and purpose; second sentence discloses return values. Perfectly front-loaded and sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and no output schema, the description adequately covers the tool's scope by mentioning both input logic and return values. Sufficient for a stateless calculation tool, though could explicitly note it is a read-only analysis operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds semantic grouping by labeling the required parameters as 'account size, max acceptable drawdown, and asset daily volatility'—providing financial context beyond raw parameter names. Does not mention the optional 'mmr' parameter, but this is acceptable given schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Calculate' + resource 'maximum safe leverage' + key inputs. Clearly distinguishes from siblings like run_position_sizing (which sizes specific trades) and run_liquidation_safety (which checks existing positions).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to select this over similar workflow tools like run_position_sizing or run_pre_trade_check. The specific input requirements provide implicit context, but lacks explicit 'when to use/when not to use' direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
workflow.run_pnl_planning (Grade: B)
Calculate net PnL, ROE, fees and gross profit/loss for a futures trade. Use when user asks "what's my profit/loss on this trade?" Returns: grossPnl, fees, netPnl, netPnlUsdt, roe (%).
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | Trade direction | |
| size | Yes | Position size in base asset | |
| exitPrice | Yes | Exit price (positive) | |
| entryPrice | Yes | Entry price (positive) | |
| feeOpenPct | No | Opening fee as fraction, e.g. 0.0002 = 0.02% | |
| feeClosePct | No | Closing fee as fraction |
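The PnL arithmetic implied by these parameters can be sketched as follows. Key names mirror the description but are assumptions; in particular, with no leverage input, ROE here is computed over full notional rather than posted margin, a simplification.

```python
def pnl_planning(side, entry_price, exit_price, size,
                 fee_open_pct=0.0002, fee_close_pct=0.0005):
    """Sketch of futures trade PnL with open/close fees (assumed formulas)."""
    direction = 1 if side == "long" else -1
    gross = direction * (exit_price - entry_price) * size
    # Fees are charged on notional at each leg's price.
    fees = entry_price * size * fee_open_pct + exit_price * size * fee_close_pct
    net = gross - fees
    notional = entry_price * size  # ROE base; real ROE would use margin
    return {"grossPnl": gross, "fees": fees, "netPnl": net,
            "roePct": net / notional * 100}
```

A long from 100 to 110 on 1 unit grosses 10, pays 0.075 in default fees, and nets 9.925 under this sketch.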
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must carry the full disclosure burden. While it lists calculated outputs (net PnL, ROE, fees), it fails to disclose whether this is a read-only calculation or triggers side effects, the expected response format, error conditions, or whether inputs are validated (e.g., the exitPrice > 0 constraint appears in the schema, but its behavioral implications do not).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, densely packed with specific calculation targets. No redundancy or filler. Front-loaded with action verb and subject. Appropriate length for the complexity level.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% schema coverage and no output schema, description adequately covers inputs but only enumerates expected return values (net PnL, ROE, etc.) without specifying structure. Given it's a workflow tool (typically implying process execution) with no annotations, description should clarify it's a deterministic calculation engine, not a state-changing operation, to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (all 6 parameters documented), establishing baseline 3. Description adds 'futures trade' context which semantically frames entryPrice/exitPrice for derivatives, but does not elaborate parameter interactions (e.g., how feeOpenPct interacts with size) or provide format examples beyond what schema already contains.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb (Calculate) and resource (futures trade) clearly stated. Lists specific outputs (net PnL, ROE, fees, gross profit/loss) which distinguishes it from sibling workflow tools like run_breakeven_planning or run_exit_target. However, lacks explicit differentiation text (e.g., 'use this when you need realized PnL vs breakeven analysis'), preventing a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description states what the tool does but provides no guidance on when to invoke it versus sibling workflow tools (e.g., run_breakeven_planning, run_exit_target, run_scenario_planning). No prerequisites, triggering conditions, or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
workflow.run_position_sizing (Grade: C)
Calculate the correct position size given a maximum risk in USDT and a stop-loss price. Use when user asks "how many coins should I buy?" or "size my position so I risk exactly $X". Returns: positionSize (base), positionUsdt, marginRequired.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | ||
| leverage | No | Leverage, default 1 | |
| riskUsdt | Yes | Maximum acceptable loss in USDT | |
| stopLoss | Yes | Stop-loss price | |
| entryPrice | Yes | Entry price | |
| feeOpenPct | No | Opening fee fraction, default 0.0002 | |
| feeClosePct | No | Closing fee fraction, default 0.0005 |
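Risk-based sizing over these parameters is typically "risk budget divided by loss per unit at the stop." The sketch below assumes that model, including fees in the per-unit loss; the server's actual fee handling is not documented.

```python
def position_sizing(entry_price, stop_loss, risk_usdt, leverage=1,
                    fee_open_pct=0.0002, fee_close_pct=0.0005):
    """Sketch: size a position so a stop-out loses exactly risk_usdt."""
    # Worst-case loss per unit: distance to stop plus fees on both legs.
    loss_per_unit = (abs(entry_price - stop_loss)
                     + entry_price * fee_open_pct
                     + stop_loss * fee_close_pct)
    size = risk_usdt / loss_per_unit
    position_usdt = size * entry_price
    return {"positionSize": size, "positionUsdt": position_usdt,
            "marginRequired": position_usdt / leverage}
```

Ignoring fees, risking 100 USDT with a 10-point stop from entry 100 gives a 10-unit position worth 1,000 USDT, requiring 100 USDT margin at 10x.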
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden, but it fails to state that this is a pure, read-only calculation, does not explain the calculation methodology, and omits mention of the optional fee/leverage adjustments despite their presence in the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient, with a front-loaded action verb. However, it may err toward under-specification given the tool's complexity (7 parameters, financial domain) and the lack of supporting annotations or output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a financial calculation tool with 7 parameters and no output schema. Omits critical context: that it handles both long and short sides, that it accounts for optional fees and leverage, and, most importantly, what value it returns (position size in units, margin required, etc.).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is high (86%), establishing baseline 3. The description adds semantic context linking 'riskUsdt' and 'stopLoss' parameters to the calculation purpose, but provides no additional color for side, entryPrice, leverage, or fee parameters beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Calculate') and resource ('position size'), and implicitly distinguishes from siblings by specifying the unique input combination of 'maximum risk in USDT' and 'stop-loss price' required for this particular calculation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus siblings like 'run_pre_trade_check' or 'run_pnl_planning', nor any mention of prerequisites such as requiring valid entry/exit prices or risk parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
workflow.run_pre_trade_check (Grade: A)
Full pre-trade decision card: orchestrates position sizing, breakeven, liquidation, and funding cost in one call. Use when user describes a full trade setup and asks "should I take this trade?" or "run the numbers on this setup". Provide exchange+symbol to fetch live funding rate automatically. Returns: positionSize, breakeven, liquidationPrice, fundingCost, overnightBreakevenShift, verdict.
| Name | Required | Description | Default |
|---|---|---|---|
| mmr | No | Maintenance margin rate, default 0.005 | |
| side | Yes | ||
| symbol | No | Perpetual symbol, e.g. "BTCUSDT". | |
| exchange | No | Exchange code, e.g. "binance" or "bybit". Used to fetch live funding rate if funding_rate is omitted. | |
| leverage | Yes | Leverage multiplier | |
| risk_pct | Yes | Risk as % of balance, e.g. 1.0 = 1% | |
| stop_loss | Yes | Stop-loss price (positive) | |
| hold_hours | No | Expected hold time in hours for overnight shift calc. Default 8. | |
| entry_price | Yes | Entry price (positive) | |
| fee_open_pct | No | Opening fee fraction, default 0.0002 | |
| funding_rate | No | Funding rate per 8h as decimal, e.g. 0.0001. If omitted, fetched live from exchange. | |
| fee_close_pct | No | Closing fee fraction, default 0.0005 | |
| account_balance | Yes | Total account balance in USDT |
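Since the description calls this an orchestrator, its composition can be sketched by inlining simplified versions of the underlying calculations. Everything below is an assumption: the component formulas, the return keys, and the omission of the live funding-rate fetch and the verdict logic.

```python
def pre_trade_check(side, entry_price, stop_loss, account_balance, risk_pct,
                    leverage, funding_rate=0.0001, hold_hours=8,
                    fee_open_pct=0.0002, fee_close_pct=0.0005, mmr=0.005):
    """Sketch of a composite pre-trade check (assumed component formulas).

    Omits the live exchange funding-rate fetch and the verdict heuristic.
    """
    direction = 1 if side == "long" else -1
    risk_usdt = account_balance * risk_pct / 100
    size = risk_usdt / abs(entry_price - stop_loss)        # position sizing
    fee_frac = fee_open_pct + fee_close_pct
    breakeven = entry_price * (1 + direction * fee_frac)   # fee breakeven
    liq_move = 1 / leverage - mmr                          # isolated-margin liq
    liquidation = entry_price * (1 - direction * liq_move)
    intervals = hold_hours / 8
    funding_cost = -direction * size * entry_price * funding_rate * intervals
    return {"positionSize": size, "breakeven": breakeven,
            "liquidationPrice": liquidation, "fundingCostUsdt": funding_cost}
```

For a long at 100 with stop 95, 1% risk on a 10,000 USDT account, and 10x leverage, this sketch sizes 20 units, puts breakeven at 100.07, and liquidation near 90.5.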
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description must carry the full burden. It successfully discloses the composite nature (orchestrates multiple tools) and the output structure (decision card contents). However, it fails to mention significant behavioral traits: the live exchange-data dependency for the funding rate (mentioned only in the parameter schema), network failure modes, and calculation latency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence front-loaded with purpose ('Composite pre-trade check'), immediately followed by inputs, internal mechanics, and outputs. Zero redundancy; every clause adds distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong compensation for missing output schema by detailing the 'structured decision card' contents (position size, breakeven, etc.). Minor gap: no mention of error conditions given the external exchange data dependency, which is relevant for a 13-parameter workflow tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 92%, so baseline 3 applies. Description mentions 'given a single trade setup' as generic context but does not augment specific parameter meanings, formats, or cross-parameter relationships beyond the excellent schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity: 'Composite pre-trade check' establishes the verb and scope, while 'orchestrates risk_sizer + breakeven + liquidation + funding_cost' explicitly distinguishes this composite from siblings like workflow.run_position_sizing and workflow.run_funding_cost by listing the specific components it unifies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through the orchestration list (suggests this is the 'all-in-one' option), but lacks explicit when-to-use guidance versus individual primitive checks or whether it should be called before every trade vs. only for high-risk setups.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
workflow.run_risk_reward (Grade: A)
Full risk:reward analysis — the single best tool when user describes a trade with entry, stop, and target. Calculates R:R ratio, position size, liquidation price, breakeven, and P&L at both stop and target. Returns a verdict: strong (3:1+) / good (2:1+) / marginal / poor. Use when user asks "is this trade worth taking?" or "what's my risk reward on this setup?".
| Name | Required | Description | Default |
|---|---|---|---|
| mmr | No | Maintenance margin rate (default 0.005) | |
| side | Yes | ||
| leverage | Yes | Leverage multiplier | |
| risk_pct | Yes | Max risk as % of account | |
| stop_loss | Yes | Stop-loss price | |
| entry_price | Yes | Entry price | |
| take_profit | Yes | Take-profit price | |
| fee_open_pct | No | Open fee rate (default 0.0002) | |
| fee_close_pct | No | Close fee rate (default 0.0005) | |
| account_balance | Yes | Account balance in USDT |
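The verdict thresholds stated in the description (strong at 3:1+, good at 2:1+) suggest a ratio-plus-bucketing core, sketched below. The R:R arithmetic and the "marginal" cutoff at 1:1 are assumptions; the real tool also computes sizing, liquidation, breakeven, and P&L, which are omitted here.

```python
def risk_reward(side, entry_price, stop_loss, take_profit):
    """Sketch: R:R ratio and verdict bucketing (assumed thresholds)."""
    direction = 1 if side == "long" else -1
    risk = direction * (entry_price - stop_loss)      # loss per unit at stop
    reward = direction * (take_profit - entry_price)  # gain per unit at target
    rr = reward / risk
    if rr >= 3:
        verdict = "strong"
    elif rr >= 2:
        verdict = "good"
    elif rr >= 1:
        verdict = "marginal"
    else:
        verdict = "poor"
    return {"rrRatio": rr, "verdict": verdict}
```

A long with entry 100, stop 95, and target 115 risks 5 to make 15, a 3:1 ratio that lands in the "strong" bucket under these assumed cutoffs.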
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses functional outputs (calculated values and verdict) but lacks behavioral context: no mention of side effects, validation errors for impossible price combinations (e.g., stop-loss beyond entry for wrong side), computational complexity, or whether results are deterministic calculations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently structured as '[Scope]: [Specific outputs]. [Return value].' Front-loads 'Full R:R analysis' immediately establishing domain. Zero redundant text; every clause specifies distinct calculations or the verdict return type.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 10 parameters and no output schema, the description adequately compensates by detailing return values (verdict categories and intermediate calculations). Could improve by noting parameter validation rules or edge cases (e.g., liquidation price below stop-loss), but sufficiently complete for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 90% with clear parameter descriptions. The description adds semantic context by framing 'risk_pct' as driving 'position size' and implying the analytical relationship between inputs (entry, stop, target) and outputs, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'analysis' with comprehensive scope (position size, liquidation price, breakeven, P&L) and distinguishes from siblings by emphasizing the 'verdict' rating output (strong/good/marginal/poor) that aggregates these calculations into a qualitative assessment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus siblings like 'run_position_sizing', 'run_breakeven_planning', or 'run_liquidation_safety' which appear to do subsets of these calculations. The term 'Full R:R analysis' implies comprehensiveness but doesn't state when holistic vs. specific analysis is preferred.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
workflow.run_scale_out — Grade A
Scale-out planner: P&L, ROI, and cumulative P&L for each partial exit level. Use when user wants to take profit at multiple targets — "close 30% at $90k, 30% at $95k, 40% at $100k — what's my total P&L?". Returns: per-level pnl, weightedAvgExitPrice, totalRoi.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | ||
| exits | Yes | ||
| total_size | Yes | Total position size in base currency | |
| entry_price | Yes | Entry price | |
| fee_open_pct | No | Open fee rate (default 0.0002) | |
| fee_close_pct | No | Close fee rate (default 0.0005) |
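The returned fields (per-level pnl, weightedAvgExitPrice, totalRoi) suggest a per-level loop of the following shape — a minimal sketch under assumed conventions, not the server's code. In particular, it assumes each exit is given as (price, fraction of position), that open fees are apportioned per level, and that ROI is measured against full notional rather than margin.

```python
def scale_out(entry_price, total_size, exits, side="long",
              fee_open_pct=0.0002, fee_close_pct=0.0005):
    """Hypothetical scale-out sketch; exits = [(price, fraction), ...]."""
    direction = 1 if side == "long" else -1
    levels, cum_pnl, weighted_px = [], 0.0, 0.0
    for price, frac in exits:
        qty = total_size * frac
        gross = direction * (price - entry_price) * qty
        # apportion the open fee to this slice, charge the close fee at exit
        fees = entry_price * qty * fee_open_pct + price * qty * fee_close_pct
        pnl = gross - fees
        cum_pnl += pnl
        weighted_px += price * frac
        levels.append({"exitPrice": price, "pnl": round(pnl, 2),
                       "cumulativePnl": round(cum_pnl, 2)})
    notional = entry_price * total_size
    return {"levels": levels,
            "weightedAvgExitPrice": round(weighted_px, 2),
            "totalRoi": round(cum_pnl / notional * 100, 4)}
```

Note the sketch does not enforce that fractions sum to 100% — exactly the validation gap the review below flags as undocumented.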
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates the tool is a planner that returns calculated metrics (weighted average exit price, overall ROI), suggesting it is read-only, but omits details about validation logic, side effects, or whether results are cached. The behavior can only be inferred to be computational rather than destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure efficiently front-loads the tool's purpose (scale-out planner) and specifies return values without redundancy, though the colon-prefixed label 'Scale-out planner:' partially echoes the tool name. Every sentence contributes distinct information about functionality and outputs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (nested exit structures, fee calculations) and absence of an output schema or annotations, the description adequately covers primary outputs but omits validation constraints (e.g., whether exit percentages must total 100%) and provides minimal detail on parameter interdependencies. It meets baseline needs but leaves operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 67% schema coverage, the description adds crucial semantic context by associating the exits array with 'partial exit level' planning and explaining that fees factor into the P&L calculations mentioned. It clarifies the financial domain purpose of parameters that might otherwise appear as abstract numbers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the tool as a 'Scale-out planner' and specifies it calculates 'P&L, ROI, and cumulative P&L for each partial exit level,' clearly distinguishing it from entry-focused siblings like run_dca_entry or single-exit tools. The explicit mention of 'partial exit level' precisely defines the resource being modeled.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the term 'Scale-out' implies usage for multi-level exit strategies, the description provides no explicit guidance on when to prefer this over run_exit_target or run_breakeven_planning, nor does it mention prerequisites like ensuring exit percentages sum to 100%. Usage must be inferred from domain terminology.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
workflow.run_scenario_planning — Grade B
Run a scenario analysis: compute PnL for multiple price-change percentages at once. Use when user asks "show me my P&L if BTC moves -10%, -5%, +5%, +10%". Returns: array of { deltaPct, exitPrice, netPnl, roe }.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | ||
| size | Yes | Position size in base asset | |
| deltasPct | Yes | List of price change percentages, e.g. [-10, -5, 0, 5, 10] | |
| entryPrice | Yes | Entry price | |
| feeOpenPct | No | Opening fee fraction | |
| feeClosePct | No | Closing fee fraction |
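The documented return shape — an array of { deltaPct, exitPrice, netPnl, roe } — can be reproduced with a simple loop over the deltas. The following is a sketch under stated assumptions (long-only direction handling shown, ROE computed against full notional as if 1x), not the server's actual formula.

```python
def scenario_planning(entry_price, size, deltas_pct, side="long",
                      fee_open_pct=0.0002, fee_close_pct=0.0005):
    """Hypothetical sketch: net PnL across several price-change scenarios."""
    direction = 1 if side == "long" else -1
    notional = entry_price * size  # ROE base; a real tool would use margin
    rows = []
    for d in deltas_pct:
        exit_price = entry_price * (1 + d / 100)
        gross = direction * (exit_price - entry_price) * size
        fees = (entry_price * size * fee_open_pct
                + exit_price * size * fee_close_pct)
        net = gross - fees
        rows.append({"deltaPct": d, "exitPrice": round(exit_price, 2),
                     "netPnl": round(net, 2),
                     "roe": round(net / notional * 100, 2)})
    return rows
```

Called with deltasPct=[-10, -5, 0, 5, 10] as in the description's example prompt, this yields one row per scenario, with the 0% row showing a small negative net PnL from fees alone.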
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the computational purpose (calculating PnL for various deltas) but omits operational details like side effects, idempotency, and error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core concept ('scenario analysis') and immediately specifies the operation ('compute PnL'). No words are wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the well-documented schema (83% coverage) and lack of output schema, the description adequately covers the tool's purpose. However, for a financial calculation tool with 6 parameters and no annotations, it could benefit from hints about output structure or valid percentage ranges.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 83% schema description coverage, the baseline is 3. The description adds semantic context by clarifying that 'deltasPct' represents price-change percentages for scenario analysis, but does not elaborate on the optional fee parameters or the 'side' enum beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'scenario analysis' and 'compute PnL' for 'multiple price-change percentages at once,' providing specific verb and resource context. However, it does not explicitly differentiate from the similar sibling 'workflow.run_pnl_planning' or other workflow tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'at once' implies this is for batch scenario processing, but there is no explicit guidance on when to use this tool versus alternatives like 'run_pnl_planning' or 'run_breakeven_planning', nor when to avoid it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.