kit
Server Details
Non-custodial execution primitives for DeFi on Solana. 1 bps to open. Everything else is free.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: toreva/kit
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 20 of 20 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but there is some overlap between toreva_execute and toreva_earn/toreva_perps_long/toreva_perps_short, as execute seems to cover multiple strategies while others are specific. The perps tools are well-differentiated, with clear actions like long, short, close, and query functions. Overall, descriptions help clarify boundaries, but a few tools could cause mild confusion.
Tool names follow a highly consistent pattern: all use snake_case with a 'toreva_' prefix, followed by a verb or noun phrase (e.g., configure, earn, execute, perps_long). This consistency makes the set predictable and easy to navigate, with no deviations in style or structure across the 20 tools.
With 20 tools, the count is slightly high but reasonable for the domain of DeFi and perpetual futures on Solana, which involves multiple operations like execution, simulation, querying, and configuration. It covers a broad scope without being excessive, though it might feel heavy compared to simpler servers.
The tool set provides comprehensive coverage for Toreva's domain, including configuration, execution, simulation, explanation, and querying across earn, stake, balance, and perpetual futures. It supports full CRUD-like operations for perps (open, close, manage margin) and read-only queries, with no obvious gaps that would hinder agent workflows.
Available Tools
20 tools

toreva_configure (C)
Configure user preferences for Toreva strategy execution. Set default constraints, preferred protocols, risk tolerance, and notification preferences.
| Name | Required | Description | Default |
|---|---|---|---|
| configKey | Yes | Configuration key to set | |
| configValue | No | Configuration value | |
| walletAddress | No | Solana wallet address | |
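The parameters above can be sketched as a call payload with a minimal pre-flight check. This is an illustrative sketch only: the `configKey` and `configValue` values are hypothetical, since the listing does not document which keys the server accepts, and the wallet address is a placeholder.

```python
# Illustrative toreva_configure arguments. configKey/configValue values
# are hypothetical: the listing does not document the accepted keys.
args = {
    "configKey": "riskTolerance",   # required
    "configValue": "conservative",  # optional
    # optional; base58 placeholder, not a real user wallet
    "walletAddress": "So11111111111111111111111111111111111111112",
}

# Only configKey is marked required in the schema.
missing = {"configKey"} - args.keys()
assert not missing, f"missing required parameters: {missing}"
```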
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a configuration tool ('set'), implying a write/mutation operation, but doesn't disclose critical traits: whether changes are persistent, require authentication, have side effects, or involve rate limits. For a mutation tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently states the purpose and provides concrete examples without unnecessary words. It's front-loaded with the core action ('configure user preferences') and uses a list for clarity, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a configuration tool with 3 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain what happens after configuration (e.g., success response, error cases), the scope of changes (user-specific vs. global), or how it integrates with other Toreva tools. For a mutation tool without structured safety or output info, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters (configKey, configValue, walletAddress). The description adds some context by listing examples of what can be configured (constraints, protocols, etc.), which loosely maps to configKey/configValue, but doesn't provide specific syntax, formats, or relationships beyond what the schema offers. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('configure') and resource ('user preferences for Toreva strategy execution') with specific examples of what can be configured (constraints, protocols, risk tolerance, notifications). It distinguishes itself from siblings like 'toreva_execute' or 'toreva_simulate' by focusing on configuration rather than execution or analysis. That said, no other configuration-focused tools appear in the set, so there is little risk of confusion on that axis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., whether a wallet must be set up first), nor does it compare to other tools like 'toreva_earn' or 'toreva_strategies' for related preferences. The context is implied as part of Toreva strategy setup, but no explicit usage rules are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_earn (A)
Deploy idle USDC to the highest-yield DeFi lending strategy on Solana. Risk-ranked across verified Day 1 venues (Kamino, Save) — not just one protocol. Scan, simulate projected returns with fees, and execute via user-signable transaction. Returns structured receipt (what, why, cost, next). Non-custodial. User-approved. Venue intelligence fee: 4 bps ($4 on $10K). Use when capital is idle or directional strategies underperform.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token to deploy into DeFi yield strategy. Currently USDC on Solana. | |
| amount | No | Amount of token to deploy. Required for simulate and execute. | |
| wallet | No | Solana wallet address (base58). Required for execute — transaction is built for this wallet. | |
| operation | Yes | scan = discover and rank yield venues; simulate = preview projected returns, fees, and strategy details; execute = build a user-signable transaction for the best strategy | |
| constraints | No | Optional strategy parameters constraining venue selection | |
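The operation modes and their conditional requirements, together with the 4 bps fee quoted in the description ($4 on $10K), can be sketched as a pre-flight check. This is a sketch of the documented constraints only; the server's own validation and fee rounding may differ.

```python
def validate_earn_args(args: dict) -> None:
    """Check toreva_earn arguments against the documented constraints."""
    assert args.get("token") == "USDC", "Day 1 supports USDC only"
    op = args.get("operation")
    assert op in {"scan", "simulate", "execute"}, f"unknown operation: {op}"
    if op in {"simulate", "execute"}:
        assert "amount" in args, "amount is required for simulate and execute"
    if op == "execute":
        assert "wallet" in args, "wallet is required for execute"

def venue_intelligence_fee(amount_usd: float) -> float:
    """4 bps, per the description: $4 on $10K."""
    return amount_usd * 4 / 10_000

validate_earn_args({"token": "USDC", "operation": "scan"})
assert venue_intelligence_fee(10_000) == 4.0
```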
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: non-custodial nature, user-approved execution, fee structure (4 bps), risk-ranking across venues, and that it returns a structured receipt. It could improve by mentioning potential risks like smart contract vulnerabilities or liquidity issues.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded purpose, followed by key features and usage guidelines. Every sentence adds value, though it could be slightly more concise by combining some clauses about fees and execution.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (DeFi operations with multiple modes), no annotations, and no output schema, the description does well to cover purpose, usage, behavioral aspects, and constraints. It could be more complete by briefly mentioning what the 'structured receipt' contains or potential error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds some context about the operation modes (scan, simulate, execute) and mentions constraints like 'risk-ranked across verified Day 1 venues,' but doesn't provide additional parameter details beyond what's already well-documented in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('deploy idle USDC to the highest-yield DeFi lending strategy on Solana'), resource (USDC), and scope (across verified Day 1 venues like Kamino and Save). It distinguishes from siblings by focusing on yield optimization rather than configuration, execution of specific trades, or scanning/simulating alone.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('Use when capital is idle or directional strategies underperform') and distinguishes from alternatives by mentioning it's 'not just one protocol' and includes scanning, simulation, and execution in one tool. This provides clear context for selection versus other tools like toreva_scan or toreva_simulate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_execute (A)
Execute a Day 1 Toreva strategy on Solana. Public scope is exactly three products: Earn, Stake, and Balance. Runtime strategy ids are normalized behind public product ids and hidden strategy toggles are rejected. Non-custodial. Every action receipted.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | No | Optional token symbol. Day 1 public assets are USDC, SOL, and MEW. | |
| amount | No | Amount in USD or token units | |
| constraints | No | Optional execution constraints | |
| description | Yes | What you want to do, in natural language | |
| walletAddress | Yes | Solana wallet address to execute for | |
| strategyKeyword | No | Optional public product id or runtime strategy id. Supported Day 1 values map to Earn, Stake, and Balance only. | |
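The scoping rule above, where runtime strategy ids are normalized behind the three public product ids and hidden toggles are rejected, can be sketched as a guard. The normalization logic here is an assumption for illustration; only the three-product scope comes from the description.

```python
# Day 1 public product ids, per the description.
PUBLIC_PRODUCTS = {"earn", "stake", "balance"}

def normalize_strategy(keyword):
    """Accept only the Day 1 public product ids; anything outside that
    scope (including hidden strategy toggles) is rejected."""
    if keyword is None:
        return None  # strategyKeyword is optional
    k = keyword.strip().lower()
    if k not in PUBLIC_PRODUCTS:
        raise ValueError(f"unsupported strategy keyword: {keyword}")
    return k

assert normalize_strategy("Earn") == "earn"
assert normalize_strategy(None) is None
```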
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds valuable context: it specifies the platform (Solana), mentions non-custodial nature, indicates that actions are receipted, and describes constraints like rejected hidden toggles. However, it doesn't cover potential side effects, error handling, or performance characteristics like rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in four sentences that each add value: execution context, product scope, runtime constraints, and system characteristics. There's no wasted verbiage, though it could be slightly more front-loaded by leading with the core action more prominently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex execution tool with 6 parameters, nested objects, and no output schema, the description provides adequate context about what the tool does and its constraints. However, without annotations or an output schema, it should ideally explain more about what happens after execution: what 'receipted' means, the typical response format, or error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal parameter semantics beyond the schema - it only implies that 'strategyKeyword' maps to the three public products. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Execute a Day 1 Toreva strategy on Solana') and specifies the scope ('Public scope is exactly three products: Earn, Stake, and Balance'), which distinguishes it from sibling tools like 'toreva_configure' or 'toreva_simulate'. However, it doesn't explicitly differentiate from 'toreva_earn' or 'toreva_strategies' which might overlap in functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'Day 1 Toreva strategy' and listing specific products, but doesn't provide explicit guidance on when to use this tool versus alternatives like 'toreva_earn' or 'toreva_strategies'. It mentions 'hidden strategy toggles are rejected' which hints at limitations, but lacks clear when-to-use or when-not-to-use directives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_explain (C)
Explain an existing intent, receipt, or strategy in plain language. Provides breakdown of what happened, fees charged, and reasoning behind venue/strategy selection.
| Name | Required | Description | Default |
|---|---|---|---|
| intentId | No | Intent ID to explain | |
| receiptId | No | Receipt ID to explain | |
| description | No | What you want explained, in natural language | |
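All three parameters are optional in the schema. A reasonable assumption, not stated in the listing, is that a useful call supplies at least one identifier or a natural-language description; a minimal check under that assumption:

```python
def explain_args_usable(args: dict) -> bool:
    # Assumption: the schema marks every parameter optional, but an
    # empty call would give the server nothing to explain.
    return any(args.get(k) for k in ("intentId", "receiptId", "description"))

assert explain_args_usable({"receiptId": "rcpt_123"})  # hypothetical id format
assert not explain_args_usable({})
```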
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the tool provides a 'breakdown of what happened, fees charged, and reasoning behind venue/strategy selection,' which gives some behavioral context about the output content. However, it doesn't disclose critical traits like whether this is a read-only operation, if it requires specific permissions, or any rate limits—important for a tool that might access sensitive financial data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured in two sentences: the first states the purpose, and the second details the output content. Every sentence adds value without redundancy. It could be slightly improved by front-loading more critical information, but it's efficient overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (explaining financial intents/receipts/strategies), no annotations, and no output schema, the description is moderately complete. It covers the purpose and output breakdown but lacks details on behavioral traits, usage context, and return format. This is adequate as a minimum viable description but has clear gaps for effective tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters (intentId, receiptId, description) with clear descriptions. The tool description adds no additional parameter semantics beyond what's in the schema, such as how these parameters interact or examples of usage. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Explain an existing intent, receipt, or strategy in plain language.' It specifies the verb ('explain') and the resources (intent, receipt, strategy) with the output format ('plain language'). However, it doesn't explicitly differentiate from sibling tools like 'toreva_perps_explain' or 'toreva_simulate' which might have overlapping explanatory functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions what gets explained (intent, receipt, strategy) but doesn't specify prerequisites, context for usage, or exclusions. With sibling tools like 'toreva_perps_explain' and 'toreva_simulate', this lack of differentiation is a significant gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_add_margin (A)
Add margin to a perpetual futures position. Executes at the position's venue. Free — no Toreva fee. Non-custodial.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Margin token (e.g. USDC). | |
| venue | Yes | Venue the position is on (drift, jupiter-perps, pacifica, or flash). | |
| amount | Yes | Margin amount to add or remove. | |
| positionId | Yes | Position identifier returned from open verb. | |
| walletAddress | Yes | Solana wallet address (base58). | |
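All five parameters are required here. A minimal pre-flight check, assuming only the documented venue list and required fields; the amount, positionId, and wallet values are illustrative placeholders.

```python
VENUES = {"drift", "jupiter-perps", "pacifica", "flash"}
REQUIRED = {"token", "venue", "amount", "positionId", "walletAddress"}

def validate_add_margin(args: dict) -> None:
    """Check toreva_perps_add_margin arguments before the call."""
    missing = REQUIRED - args.keys()
    assert not missing, f"missing required parameters: {sorted(missing)}"
    assert args["venue"] in VENUES, f"unknown venue: {args['venue']}"
    assert args["amount"] > 0, "margin amount must be positive"

validate_add_margin({
    "token": "USDC",
    "venue": "drift",
    "amount": 250,           # illustrative
    "positionId": "pos-1",   # placeholder; real ids come from the open verb
    "walletAddress": "So11111111111111111111111111111111111111112",  # placeholder
})
```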
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and adds valuable behavioral context: it discloses that execution is 'Free — no Toreva fee' (cost structure) and 'Non-custodial' (security model). However, it doesn't mention potential side effects like position liquidation risks, transaction confirmation requirements, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with zero waste: first states core functionality, second adds execution context, third provides cost and security information. Every sentence earns its place by adding distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial transaction tool with 5 required parameters and no annotations or output schema, the description provides adequate core information but lacks details about return values, error conditions, or transaction lifecycle. The cost and security disclosures help, but more behavioral context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Add margin'), target resource ('to a perpetual futures position'), and execution context ('Executes at the position's venue'). It distinguishes from sibling 'toreva_perps_remove_margin' by specifying 'add' rather than 'remove' margin.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('perpetual futures position', 'position's venue') but doesn't explicitly state when to use this tool versus alternatives like 'toreva_perps_remove_margin' or other position management tools. No explicit exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_cancel_order (A)
Cancel an open perpetual futures order. Executes at the position's venue. Free — no Toreva fee. Non-custodial.
| Name | Required | Description | Default |
|---|---|---|---|
| venue | Yes | Venue the position is on (drift, jupiter-perps, pacifica, or flash). | |
| positionId | Yes | Position identifier returned from open verb. | |
| walletAddress | Yes | Solana wallet address (base58). | |
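A cancellation call needs only the three parameters above. An illustrative payload with a basic sanity check; the positionId and wallet values are placeholders, not documented examples.

```python
# Illustrative toreva_perps_cancel_order arguments.
args = {
    "venue": "jupiter-perps",
    "positionId": "pos-1",  # placeholder; returned by the open verb
    "walletAddress": "So11111111111111111111111111111111111111112",  # placeholder
}

assert {"venue", "positionId", "walletAddress"} <= args.keys()
assert args["venue"] in {"drift", "jupiter-perps", "pacifica", "flash"}
```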
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates key traits: it's a cancellation operation (implies mutation), executes at a specific venue, is free (no Toreva fee), and is non-custodial (wallet control). However, it doesn't mention potential side effects like partial fills, confirmation requirements, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with four brief, information-dense phrases. Each phrase adds value: the core action, execution context, cost information, and custody model. There's zero wasted language and it's perfectly front-loaded with the primary function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description does well by covering the action, execution venue, cost structure, and custody model. However, it lacks information about return values, error conditions, or confirmation requirements, which would be helpful given the tool's financial nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds no additional parameter information beyond what's in the schema, maintaining the baseline score of 3 for adequate but not enhanced parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Cancel') and resource ('an open perpetual futures order'), distinguishing it from siblings like 'toreva_perps_close' (which likely closes positions rather than cancels orders) and 'toreva_perps_long/short' (which open positions). It precisely identifies the tool's function without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'open perpetual futures order' and 'position's venue', but does not explicitly state when to use this tool versus alternatives like 'toreva_perps_close' or other cancellation methods. It provides basic prerequisites but lacks explicit guidance on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_close (A)
Close a perpetual futures position. Executes at the position's venue. Free — no Toreva fee. Non-custodial. Execution only.
| Name | Required | Description | Default |
|---|---|---|---|
| venue | Yes | Venue the position is on (drift, jupiter-perps, pacifica, or flash). | |
| positionId | Yes | Position identifier returned from open verb. | |
| walletAddress | Yes | Solana wallet address (base58). | |
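toreva_perps_close takes the same three position-scoped parameters as toreva_perps_cancel_order, so one pre-flight check can serve both. A sketch under the documented venue list only; placeholder values are illustrative.

```python
# Shared shape for the position-scoped perps verbs (close, cancel_order).
POSITION_SCOPED_REQUIRED = {"venue", "positionId", "walletAddress"}
VENUES = {"drift", "jupiter-perps", "pacifica", "flash"}

def validate_position_scoped(args: dict) -> None:
    missing = POSITION_SCOPED_REQUIRED - args.keys()
    assert not missing, f"missing required parameters: {sorted(missing)}"
    assert args["venue"] in VENUES, f"unknown venue: {args['venue']}"

validate_position_scoped({
    "venue": "flash",
    "positionId": "pos-1",  # placeholder; returned by the open verb
    "walletAddress": "So11111111111111111111111111111111111111112",  # placeholder
})
```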
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond the input schema by stating 'Free — no Toreva fee', 'Non-custodial', and 'Execution only', which informs about cost, custody model, and scope. However, it does not cover potential side effects like market impact, confirmation times, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, with four brief phrases that each add distinct value: the core action, execution venue, cost, custody model, and scope. There is no redundant or wasted language, making it efficient for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a financial transaction tool with no annotations and no output schema, the description is moderately complete. It covers key behavioral aspects like fees and custody, but lacks details on return values, error handling, or execution guarantees. For a tool that closes positions, more context on outcomes would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, providing clear documentation for all three parameters (venue, positionId, walletAddress). The description does not add any parameter-specific details beyond what the schema already states, so it meets the baseline of 3 without compensating for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Close a perpetual futures position') and resource ('position'), distinguishing it from sibling tools like toreva_perps_long, toreva_perps_short, toreva_perps_add_margin, and toreva_perps_remove_margin which handle different position operations. The verb 'Close' is precise and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'Executes at the position's venue' and 'Execution only', suggesting this is for closing existing positions rather than opening or modifying them. However, it lacks explicit guidance on when to use this tool versus alternatives like toreva_perps_cancel_order or toreva_perps_funding_settle, and does not specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_explain (A)
Explain a perpetual futures position or trade. Returns venue used, entry price, fees paid, funding rate at entry, current P&L if open, and why the venue was selected. Free. Use after execution to understand what happened and why.
| Name | Required | Description | Default |
|---|---|---|---|
| venue | No | Venue context (drift, jupiter-perps, pacifica, or flash). | |
| positionId | No | Position ID to explain. | |
| txSignature | No | Transaction signature to explain. | |
| walletAddress | No | Solana wallet address (base58). | |
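The schema offers two optional identifiers, positionId and txSignature, without saying how they interact. A sketch of one plausible selection rule; the preference order is an assumption, not documented behavior.

```python
def pick_explain_target(args: dict):
    # Assumption: positionId and txSignature are alternative ways to
    # identify what to explain; prefer the position id when both appear.
    if args.get("positionId"):
        return ("positionId", args["positionId"])
    if args.get("txSignature"):
        return ("txSignature", args["txSignature"])
    return None  # nothing identifiable to explain

assert pick_explain_target({"positionId": "pos-1"}) == ("positionId", "pos-1")
assert pick_explain_target({}) is None
```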
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is 'Free' (cost aspect) and describes the return content (venue, entry price, fees, etc.), which is helpful. However, it doesn't cover important behavioral aspects like rate limits, authentication requirements, error conditions, or whether this is a read-only operation (though 'explain' implies non-destructive).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with three sentences that each serve a purpose: stating the tool's function, listing return values, and providing usage guidance. It's front-loaded with the core purpose. The 'Free' mention could be integrated more smoothly, but overall it's efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of perpetual futures trading and the absence of both annotations and output schema, the description provides adequate but incomplete context. It covers the purpose, return content, and usage timing, but lacks details about authentication, error handling, rate limits, and the exact format of returned information. For a financial tool with 4 parameters, more behavioral context would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 4 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions 'venue' and 'position' concepts generally but doesn't provide additional context about parameter usage or relationships.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Explain a perpetual futures position or trade' with specific details about what information is returned (venue, entry price, fees, etc.). It distinguishes itself from other 'explain' tools by focusing on perpetual futures positions/trades, but doesn't explicitly differentiate from sibling tools like 'toreva_explain' or 'toreva_perps_query_position'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool: 'Use after execution to understand what happened and why.' It specifies the timing (post-execution) and purpose (understanding outcomes and rationale). However, it doesn't mention when NOT to use it or explicitly compare it to alternatives like 'toreva_perps_query_position' or 'toreva_explain'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_funding_settle
Settle funding payments on a perpetual futures position. Executes at the position's venue. Free — no Toreva fee. Non-custodial.
| Name | Required | Description | Default |
|---|---|---|---|
| venue | Yes | Venue the position is on (drift, jupiter-perps, pacifica, or flash). | |
| positionId | Yes | Position identifier returned from open verb. | |
| walletAddress | Yes | Solana wallet address (base58). | |
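The settlement amount this tool moves can be sketched with standard perp funding arithmetic. A hedged example, assuming a constant hourly rate and the usual sign convention (positive rate: longs pay shorts) — neither is stated in the description:

```python
def accrued_funding_usd(notional_usd: float, hourly_rate: float,
                        hours: int, is_long: bool) -> float:
    """Funding accrued over `hours` intervals at a constant hourly rate.
    Sign convention (assumed): positive rate means longs pay shorts,
    so a long position accrues a negative (owed) amount."""
    paid = notional_usd * hourly_rate * hours
    return -paid if is_long else paid
```

At a 0.01% hourly rate, a $10,000 long accrues about $1 of owed funding per hour — the kind of outcome detail the description could usefully state.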
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a settlement operation (implies mutation), executes at the position's venue, has no Toreva fee, and is non-custodial. However, it doesn't disclose potential side effects, rate limits, or authentication requirements beyond what's implied by the wallet address parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise at three short sentences with zero wasted words. Each sentence adds distinct value: the core functionality, execution context, and key behavioral traits (free, non-custodial). It's front-loaded with the most important information first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description does well by covering the core action, execution context, and key behavioral traits (free, non-custodial). However, it doesn't describe what happens after settlement, potential errors, or return values. Given the complexity of financial settlement operations, additional context about outcomes would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema descriptions. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Settle funding payments') on a specific resource ('perpetual futures position') with execution context ('at the position's venue'). It distinguishes from siblings like 'toreva_perps_query_funding' (which queries rather than settles) and 'toreva_perps_close' (which closes positions rather than settles funding).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool: for settling funding payments on perpetual futures positions. It implicitly distinguishes from alternatives like 'toreva_perps_query_funding' (for checking funding rates) but doesn't explicitly state when NOT to use it or name specific alternatives. The 'Free — no Toreva fee' and 'Non-custodial' statements provide additional usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_long
Open a long perpetual futures position on Solana. Routes for best execution across Jupiter Perps, Pacifica, Drift, and Flash Trade — compares fees, funding rate, and available liquidity, then fills at whichever venue offers the better price. 1 bps execution fee on notional. Trades routed to Drift receive a 5% fee discount vs going direct. Your agent decides direction, size, and leverage. Toreva handles venue selection and transaction construction. Non-custodial. Every execution receipted. Execution only — not financial advice.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token to take a position on (e.g. SOL, BTC, ETH). | |
| sizeUsd | Yes | Position size in USD notional. | |
| leverage | Yes | Leverage multiplier (e.g. 5 for 5x). | |
| walletAddress | Yes | Solana wallet address (base58). | |
| maxSlippageBps | No | Maximum acceptable slippage in basis points. | |
| collateralToken | Yes | Token used as collateral (e.g. USDC, SOL). | |
| collateralAmount | Yes | Amount of collateral token to post. | |
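The description says the agent decides size and leverage, yet the schema also requires collateralToken and collateralAmount, so the three values should be mutually consistent. A sketch of that sanity check, assuming isolated margin (notional ≈ collateral value × leverage) and a caller-supplied collateral price — both assumptions, since the description never states the relationship:

```python
# Hypothetical consistency check before calling toreva_perps_long.
# The isolated-margin relationship and the 5% tolerance are illustrative.
def check_position_args(size_usd: float, leverage: float,
                        collateral_amount: float, collateral_price_usd: float,
                        tolerance: float = 0.05) -> bool:
    """True if sizeUsd is within tolerance of collateral value x leverage."""
    implied_notional = collateral_amount * collateral_price_usd * leverage
    return abs(implied_notional - size_usd) <= tolerance * size_usd
```

For example, $1,000 of USDC at 5x implies roughly $5,000 of notional; asking for $5,000 passes, asking for $5,000 against only $500 of collateral does not.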
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: execution fee structure (1 bps), Drift fee discount (5%), routing logic (compares fees, funding rate, liquidity), non-custodial nature, execution receipting, and the agent's vs. Toreva's responsibilities. It doesn't mention rate limits, error handling, or transaction confirmation details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. Most sentences earn their place by providing important behavioral context, though the 'Execution only — not financial advice' could be considered slightly redundant given the tool's name clearly indicates execution functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex financial execution tool with 7 parameters and no annotations or output schema, the description provides substantial context about execution mechanics, fees, routing logic, and responsibilities. It covers the critical 'how it works' aspects well, though doesn't explain return values or error scenarios, which would be helpful given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description doesn't add specific parameter semantics beyond what's in the schema, though it does mention 'direction, size, and leverage' which maps to token, sizeUsd, and leverage parameters, and 'wallet address' which maps to walletAddress. No additional format or constraint details are provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Open a long perpetual futures position on Solana') and distinguishes it from siblings by specifying it's for long positions only (vs. toreva_perps_short for short positions). It also mentions the routing mechanism across multiple venues, which differentiates it from basic execution tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('Your agent decides direction, size, and leverage. Toreva handles venue selection and transaction construction'), but doesn't explicitly state when NOT to use it or mention specific alternatives like toreva_perps_short for short positions or toreva_perps_close for closing positions. It does distinguish from financial advice tools by stating 'Execution only — not financial advice.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_query_funding
Query current funding rates across Jupiter Perps, Pacifica, Drift, and Flash Trade for a token. Read-only. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token to query funding rates for (e.g. SOL, BTC). | |
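The description does not specify the response shape, but a per-venue rate map is the natural reading. A hedged sketch of how an agent might consume such a response — for a long, the lowest (most negative) rate is cheapest to hold:

```python
# The dict-of-rates response shape is an assumption; the tool only
# promises "current funding rates across ... venues for a token".
def cheapest_venue_for_long(rates: dict[str, float]) -> str:
    """Venue with the lowest funding rate (a negative rate pays longs)."""
    return min(rates, key=rates.get)
```

With rates like `{"drift": 0.0002, "pacifica": -0.0001, "flash": 0.0001}`, the agent would prefer pacifica for a long.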
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context by stating it is 'Read-only' (indicating safety) and 'Free' (implying no cost or rate limit concerns), which are crucial behavioral traits beyond the basic query function. However, it lacks details on response format or potential errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded, consisting of a single sentence that efficiently conveys purpose, scope, and key behavioral traits ('Read-only. Free.'). Every word earns its place without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is reasonably complete for a read-only query. However, it could improve by hinting at the return format (e.g., rates per venue) or error conditions, which would aid the agent in handling responses.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the token parameter well-documented in the schema itself. The description adds no additional parameter semantics beyond what the schema provides, such as token format examples or validation rules, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Query current funding rates') and resources ('across Jupiter Perps, Pacifica, Drift, and Flash Trade for a token'), distinguishing it from sibling tools like toreva_perps_query_markets or toreva_perps_query_position. It explicitly identifies the scope of data sources and the target resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to get funding rates for a token across specific venues), but it does not explicitly state when not to use it or name alternatives. For example, it doesn't differentiate from toreva_perps_query_markets, which might provide different perps-related data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_query_markets
List available perpetual futures markets. Read-only. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| venue | No | Filter by venue. Omit for all venues. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adds valuable behavioral context: it declares the operation as 'Read-only' (indicating no mutations) and 'Free' (suggesting no cost or rate limits). This compensates well for the lack of structured annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with three short phrases, front-loaded with the core purpose. Every word earns its place, with no wasted sentences or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter, no output schema), the description is adequate but minimal. It covers purpose and basic behavior but lacks details on output format, pagination, or error handling, which could be helpful despite the simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description doesn't mention any parameters, but the input schema has 100% coverage with a clear description for the 'venue' parameter. Since schema coverage is high, the baseline score of 3 is appropriate, as the description adds no extra parameter semantics beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('available perpetual futures markets'), making the purpose specific and unambiguous. It distinguishes from siblings by focusing on markets rather than positions, venues, or trading actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing markets, with 'Read-only' and 'Free' suggesting it's safe and cost-free. However, it doesn't explicitly state when to use this tool versus alternatives like 'toreva_perps_query_venues' or 'toreva_perps_query_position', missing explicit sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_query_position
Query open perpetual futures positions for a wallet. Read-only. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| venue | No | Filter by venue (drift, jupiter-perps, pacifica, or flash). Omit for all. | |
| walletAddress | Yes | Solana wallet address (base58). | |
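The output schema is unpublished, but an agent that receives entry and mark prices for a returned position can compute unrealized P&L with the standard linear-perp formula (the field names here are assumptions):

```python
def unrealized_pnl_usd(size_usd: float, entry_price: float,
                       mark_price: float, is_long: bool) -> float:
    """Unrealized P&L on a USD-notional linear perp position.
    Standard formula; the tool's actual response fields are not documented."""
    move = (mark_price - entry_price) / entry_price
    return size_usd * move if is_long else -size_usd * move
```

A $1,000 long opened at $100 with the mark at $125 shows +$250; the same position short shows -$250.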
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explicitly states 'Read-only' (indicating no mutations) and 'Free' (implying no cost or rate limits), which are crucial behavioral traits not covered by the schema. It doesn't mention response format or error handling, but for a read-only query tool, this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (one sentence with three clauses) and front-loaded with the core purpose. Every word earns its place: 'Query open perpetual futures positions' (action), 'for a wallet' (target), 'Read-only' (behavior), and 'Free' (cost). No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (read-only query with 2 parameters), no annotations, and no output schema, the description is reasonably complete. It covers purpose, behavior (read-only, free), and target, but lacks details on output format or error cases, which would be helpful for an agent to interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (venue and walletAddress). The description doesn't add any parameter-specific details beyond what's in the schema, such as explaining the relationship between venue filtering and wallet queries. Baseline 3 is appropriate when the schema handles parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Query open perpetual futures positions'), target resource ('for a wallet'), and scope ('open perpetual futures positions'), distinguishing it from siblings like toreva_perps_query_markets or toreva_perps_query_funding. It uses precise terminology that aligns with the tool's domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'open perpetual futures positions' and 'for a wallet', which helps differentiate it from other query tools (e.g., toreva_perps_query_markets queries markets, not positions). However, it lacks explicit guidance on when to use alternatives or any exclusions, such as not being suitable for closed positions or other asset types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_query_venues
List available perps venues with fee structures. Read-only. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
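The fee numbers stated elsewhere on this page (1 bps Toreva fee on notional, 5% discount for Drift-routed trades) can be combined into an all-in cost estimate. How the discount composes is an assumption — we read "vs going direct" as a discount on the venue's own fee, not on Toreva's:

```python
TOREVA_FEE_BPS = 1.0  # from the server description: 1 bps on notional to open

def total_open_fee_usd(notional_usd: float, venue_fee_bps: float,
                       routed_to_drift: bool = False) -> float:
    """Toreva fee plus venue fee. The 5% Drift discount is applied to the
    venue fee here, which is an interpretation, not documented behavior."""
    venue_fee = notional_usd * venue_fee_bps / 10_000
    if routed_to_drift:
        venue_fee *= 0.95
    toreva_fee = notional_usd * TOREVA_FEE_BPS / 10_000
    return toreva_fee + venue_fee
```

On $10,000 of notional at a hypothetical 10 bps venue fee, that is $11.00 direct-style versus $10.50 routed to Drift.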
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond the input schema by stating 'Read-only' (indicating no mutations) and 'Free' (implying no cost or rate limit concerns). This covers key behavioral traits like safety and accessibility, though it doesn't detail response format or potential errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded: a single sentence states the core purpose, followed by two brief clarifiers ('Read-only. Free.'). Every element earns its place by providing essential information without waste, making it efficient for an AI agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema) and the description's coverage of purpose and key behaviors, it is minimally adequate. However, without annotations or an output schema, it could benefit from more detail on the return format (e.g., list structure or data fields) to fully guide the agent, leaving some gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, focusing instead on the tool's purpose and behavior. A baseline of 4 is applied since no parameters exist, and the description adds value without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List available perps venues with fee structures.' It specifies the verb ('List'), resource ('perps venues'), and scope ('with fee structures'), making the function unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'toreva_perps_query_markets' or 'toreva_perps_query_funding', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context through 'Read-only' and 'Free,' suggesting it's a safe, cost-free query operation. However, it lacks explicit guidance on when to use this tool versus alternatives like 'toreva_perps_query_markets' or 'toreva_perps_query_funding,' leaving room for ambiguity in sibling tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_remove_margin
Remove margin from a perpetual futures position. Executes at the position's venue. Free — no Toreva fee. Non-custodial.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Margin token (e.g. USDC). | |
| venue | Yes | Venue the position is on (drift, jupiter-perps, pacifica, or flash). | |
| amount | Yes | Margin amount to add or remove. | |
| positionId | Yes | Position identifier returned from open verb. | |
| walletAddress | Yes | Solana wallet address (base58). | |
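Removing margin raises effective leverage, a risk the description never mentions. A sketch using the standard notional/equity definition (venue-specific liquidation thresholds are not modeled and the function name is illustrative):

```python
def leverage_after_removal(notional_usd: float, margin_usd: float,
                           remove_usd: float) -> float:
    """Effective leverage after withdrawing margin from an open position.
    Uses the standard notional / remaining-equity definition; actual
    venue liquidation rules vary and are not modeled here."""
    remaining = margin_usd - remove_usd
    if remaining <= 0:
        raise ValueError("cannot remove all margin from an open position")
    return notional_usd / remaining
```

Pulling $1,000 from a $10,000 position backed by $2,000 of margin doubles leverage from 5x to 10x.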
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context about execution venue ('Executes at the position's venue'), cost structure ('Free — no Toreva fee'), and custody model ('Non-custodial'). However, it doesn't describe what happens after margin removal, potential risks, or response format, leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with three short sentences that each add distinct value: the core action, execution context, and cost/custody information. There's zero wasted language, and it's front-loaded with the most important information first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial transaction tool with 5 required parameters and no annotations or output schema, the description provides good basic context but lacks information about what happens after execution, potential errors, or return values. The 'Non-custodial' disclosure is helpful, but more behavioral context would be beneficial given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all 5 parameters. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Remove margin') on a specific resource ('perpetual futures position'), distinguishing it from sibling tools like 'toreva_perps_add_margin' which performs the opposite operation. It provides a complete verb+resource+scope statement that is unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('Remove margin from a perpetual futures position') and mentions it's 'Free — no Toreva fee' which adds practical usage context. However, it doesn't explicitly state when NOT to use it or provide alternatives, though the sibling tool 'toreva_perps_add_margin' is clearly the opposite operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_short
Open a short perpetual futures position on Solana. Routes to better execution across Jupiter Perps, Pacifica, Drift, and Flash Trade. 1 bps execution fee on notional. Trades routed to Drift receive a 5% fee discount vs going direct. Your agent decides direction, size, and leverage. Toreva handles venue selection and transaction construction. Non-custodial. Execution only — not financial advice.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token to take a position on (e.g. SOL, BTC, ETH). | |
| sizeUsd | Yes | Position size in USD notional. | |
| leverage | Yes | Leverage multiplier (e.g. 5 for 5x). | |
| walletAddress | Yes | Solana wallet address (base58). | |
| maxSlippageBps | No | Maximum acceptable slippage in basis points. | |
| collateralToken | Yes | Token used as collateral (e.g. USDC, SOL). | |
| collateralAmount | Yes | Amount of collateral token to post. | |
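maxSlippageBps is the only optional parameter, and the description never says what the slippage is measured against. A sketch under the common interpretation — a tolerance relative to the current mark price, with the adverse direction depending on position side:

```python
def worst_acceptable_fill(mark_price: float, max_slippage_bps: float,
                          is_long: bool) -> float:
    """Price bound implied by maxSlippageBps. Interpretation assumed:
    slippage is measured against the current mark. A long tolerates
    fills up to the bound; a short tolerates fills down to it."""
    band = max_slippage_bps / 10_000
    return mark_price * (1 + band) if is_long else mark_price * (1 - band)
```

At a $100 mark with maxSlippageBps=50, a short would accept fills down to about $99.50 and a long up to about $100.50.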
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: routing across multiple venues (Jupiter Perps, Pacifica, Drift, Flash Trade), execution fee details (1 bps, 5% discount for Drift), non-custodial nature, and that it handles venue selection/transaction construction. It doesn't cover rate limits, error conditions, or response format, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero wasted sentences. It front-loads the core purpose, then provides execution details, fee information, role clarification, and disclaimers. Each sentence adds distinct value: routing, fees, agent responsibilities, Toreva's role, and legal context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex financial tool with 7 parameters and no annotations/output schema, the description provides good context about execution mechanics, fees, and operational boundaries. It covers the 'how' (routing, fee structure) and 'what' (short position opening) well, though doesn't explain return values or potential failure modes. Given the complexity, it's substantially complete but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description doesn't add specific parameter semantics beyond what the schema provides, though it mentions 'direction, size, and leverage' which aligns with parameters but doesn't provide additional clarification. The schema already documents all 7 parameters thoroughly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Open a short perpetual futures position'), resource ('on Solana'), and distinguishes from siblings like 'toreva_perps_long' by specifying the short direction. It goes beyond the name/title by explaining the routing mechanism and execution details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'Your agent decides direction, size, and leverage' and mentioning it's 'Execution only', but doesn't explicitly state when to use this vs alternatives like 'toreva_perps_long' or 'toreva_perps_close'. It provides some operational context but lacks explicit comparative guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_perps_simulate (Grade: A)
Simulate opening a perpetual futures position without executing. Compares Jupiter Perps, Pacifica, Drift, and Flash Trade on fees, funding rate, and liquidity. Returns projected entry price, fees, funding cost, and venue comparison. Free — no execution, no fee. Use before perps_long or perps_short to preview what would happen.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token for the simulated position (e.g. SOL, BTC, ETH). | |
| sizeUsd | Yes | Position size in USD notional. | |
| leverage | Yes | Leverage multiplier. | |
| direction | Yes | Position direction. | |
| walletAddress | No | Solana wallet address (base58). | |
| collateralToken | Yes | Collateral token (e.g. USDC, SOL). | |
| collateralAmount | Yes | Amount of collateral. | |
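As a sketch of how an agent might assemble arguments for this tool: the parameter names below come from the table above, the payload shape follows the generic MCP `tools/call` convention, and the helper function itself is illustrative, not part of any published Toreva client.

```python
# Illustrative only: build a tools/call-style payload for toreva_perps_simulate.
# Parameter names match the table above; values are placeholders.

def build_simulate_call(token, size_usd, leverage, direction,
                        collateral_token, collateral_amount,
                        wallet_address=None):
    if direction not in ("long", "short"):  # assumed enum; the schema is not shown
        raise ValueError("direction must be 'long' or 'short'")
    args = {
        "token": token,
        "sizeUsd": size_usd,
        "leverage": leverage,
        "direction": direction,
        "collateralToken": collateral_token,
        "collateralAmount": collateral_amount,
    }
    if wallet_address is not None:  # optional per the table
        args["walletAddress"] = wallet_address
    return {"name": "toreva_perps_simulate", "arguments": args}

# Preview a hypothetical 5x SOL short backed by 200 USDC, no wallet attached.
call = build_simulate_call("SOL", 1000, 5, "short", "USDC", 200)
```

Since `walletAddress` is the only optional parameter, omitting it keeps the payload valid for an anonymous preview.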
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates key behavioral traits: it's a simulation-only tool with no execution or fees, it compares multiple venues, and it returns specific projected metrics (entry price, fees, funding cost, venue comparison). However, it doesn't mention potential limitations like simulation accuracy, timeouts, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise with zero wasted words. The first sentence establishes the core purpose, the second explains what it compares and returns, the third clarifies cost/execution status, and the fourth provides usage guidance. Every sentence earns its place and information is front-loaded effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simulation tool with no annotations and no output schema, the description does an excellent job covering purpose, usage context, and behavioral characteristics. It clearly explains what the tool does, when to use it, and what to expect. The main gap is the lack of output format details (though it mentions what metrics are returned, not their structure) and potential simulation limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so all parameters are well-documented in the schema itself. The description doesn't add any additional parameter semantics beyond what's already in the schema descriptions. It mentions the comparison across venues but doesn't explain how parameters like token or collateralToken affect this comparison. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Simulate opening a perpetual futures position without executing'), the resource (perpetual futures position), and distinguishes it from siblings like perps_long and perps_short by emphasizing it's a simulation with no execution. It explicitly mentions comparing multiple venues (Jupiter Perps, Pacifica, Drift, Flash Trade) on specific metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use before perps_long or perps_short to preview what would happen') and when not to use it ('Free — no execution, no fee'), clearly positioning it as a preparatory step before actual execution tools. It names specific alternatives (perps_long, perps_short) for actual execution.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_scan (Grade: C)
Scan a Solana wallet for Day 1 Toreva opportunities and risk flags across Earn, Stake, and Balance. Non-custodial.
| Name | Required | Description | Default |
|---|---|---|---|
| walletAddress | Yes | Solana wallet address to scan. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Non-custodial,' which hints at security, but fails to describe critical traits such as whether this is a read-only operation, potential rate limits, data freshness, or what the scan output entails. This leaves significant gaps for a tool that likely queries external data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads key information (scanning for opportunities and risk flags) without unnecessary details. Every word contributes to understanding the tool's purpose, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of scanning a wallet for opportunities and risks, with no annotations and no output schema, the description is insufficient. It lacks details on behavioral aspects like safety, performance, or output format, leaving the agent under-informed for proper invocation in a real-world context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'walletAddress' well-documented in the schema. The description adds no additional meaning beyond implying the address is for a Solana wallet, which is already clear from the schema's description. This meets the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Scan') and resource ('Solana wallet') with specific scope ('Day 1 Toreva opportunities and risk flags across Earn, Stake, and Balance'), making the purpose evident. It distinguishes from siblings like 'toreva_configure' or 'toreva_execute' by focusing on scanning rather than configuration or execution, though it doesn't explicitly contrast with 'toreva_simulate' or 'toreva_strategies'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'toreva_simulate' or 'toreva_strategies', nor does it mention prerequisites or exclusions. It implies usage for scanning wallets but lacks explicit context for tool selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_simulate (Grade: B)
Simulate a Day 1 Toreva strategy without committing funds. Public scope is exactly Earn, Stake, and Balance on Solana. Returns projected venue selection, fees, and routing details for the locked Day 1 contract.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | No | Optional token symbol. Day 1 public assets are USDC, SOL, and MEW. | |
| amount | No | Amount in USD or token units | |
| description | Yes | What you want to simulate, in natural language | |
| walletAddress | No | Solana wallet address (optional for simulation) | |
| strategyKeyword | No | Optional public product id or runtime strategy id. Supported Day 1 values map to Earn, Stake, and Balance only. | |
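Because only `description` is required, this tool supports both a minimal natural-language call and a fully specified one. The payloads below are a sketch using the parameter names from the table; the argument values (and the `strategyKeyword` id) are hypothetical.

```python
# Minimal call: only the required natural-language description.
minimal = {
    "name": "toreva_simulate",
    "arguments": {"description": "Simulate parking 500 USDC in Earn for a month"},
}

# Fuller call: all optional parameters supplied.
full = {
    "name": "toreva_simulate",
    "arguments": {
        "description": "Simulate a 50/50 USDC-MEW Balance basket",
        "asset": "USDC",               # Day 1 public assets: USDC, SOL, MEW
        "amount": 500,                 # USD or token units, per the table
        "strategyKeyword": "balance",  # hypothetical id; Day 1 ids map to Earn/Stake/Balance
    },
}
```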
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that it's a simulation (non-destructive) and specifies the public scope (Earn, Stake, Balance on Solana), but lacks details on permissions, rate limits, error handling, or what 'projected venue selection, fees, and routing details' entail. For a tool with no annotations, this is insufficient behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and scope, followed by return details. It's efficient with minimal waste, though it could be slightly more structured (e.g., separating scope from returns).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic purpose and scope but lacks details on behavioral traits, error cases, or return format specifics. For a simulation tool with 5 parameters and no structured output documentation, it's minimally adequate but has clear gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 5 parameters. The description adds some context by mentioning 'public scope is exactly Earn, Stake, and Balance on Solana', which relates to asset and strategyKeyword parameters, but doesn't provide additional syntax or format details beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'simulates a Day 1 Toreva strategy without committing funds', specifying the action (simulate), resource (Day 1 Toreva strategy), and scope (public scope: Earn, Stake, Balance on Solana). It distinguishes from siblings like toreva_execute (which commits funds) but doesn't explicitly differentiate from toreva_perps_simulate or other simulation tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for simulation before committing funds, with 'without committing funds' suggesting it's for planning. However, it doesn't explicitly state when to use this vs. alternatives like toreva_execute (for actual execution) or toreva_perps_simulate (for perps simulation), nor does it mention prerequisites or exclusions beyond the public scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toreva_strategies (Grade: A)
Browse toreva's locked Day 1 strategy delivery contract for Solana. Public scope is exactly three products: Earn, Stake, and Balance. Returns each product with its runtime strategy id, fee semantics, venue status, health state, real-funds status, disclosures, as-of metadata, and locked defaults version. Balance is the Day 1 50% USDC / 50% MEW weekly rebalance basket. Hidden strategy toggles and stale variants are not exposed.
| Name | Required | Description | Default |
|---|---|---|---|
| product | No | Filter to one Day 1 public product. | "all" |
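With a single optional filter, the call shapes are simple. The sketch below assumes the `product` filter accepts the Day 1 product names; the exact accepted values and casing are not documented here.

```python
# Browse every Day 1 product (product defaults to "all" per the table).
browse_all = {"name": "toreva_strategies", "arguments": {}}

# Filter to a single product; "Earn" is an assumed value based on the
# three products named in the description (Earn, Stake, Balance).
earn_only = {"name": "toreva_strategies", "arguments": {"product": "Earn"}}
```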
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a read-only operation ('browse', 'returns'), specifies what data is included (e.g., 'runtime strategy id, fee semantics'), and clarifies limitations ('Hidden strategy toggles and stale variants are not exposed'). It lacks details on permissions or rate limits, but covers essential context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core action ('browse') and resource, followed by scope details and return data. Every sentence adds value without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does well by explaining the return data structure and limitations. However, it could be more complete by specifying the output format (e.g., JSON structure) or any error conditions, which would help an agent use it effectively without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds minimal parameter semantics by mentioning the three products ('Earn, Stake, and Balance') and the 'all' default, but does not provide additional meaning beyond what the schema already documents for the 'product' parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('browse', 'returns') and resource ('toreva's locked Day 1 strategy delivery contract for Solana'), distinguishing it from siblings like 'toreva_execute' or 'toreva_configure' by focusing on read-only browsing of public strategy contracts rather than configuration or execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool by specifying the scope ('Public scope is exactly three products: Earn, Stake, and Balance') and exclusions ('Hidden strategy toggles and stale variants are not exposed'), but does not explicitly name alternative tools for related tasks like 'toreva_earn' or 'toreva_scan'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
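Before publishing, you can sanity-check the file locally. This is an illustrative structural check only (it mirrors the snippet above); Glama's actual verifier may enforce more than this.

```python
import json

def looks_like_glama_json(raw: str) -> bool:
    """Loose check that a payload matches the /.well-known/glama.json shape shown above."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError:
        return False
    maintainers = doc.get("maintainers")
    return (isinstance(maintainers, list)
            and all(isinstance(m, dict) and "email" in m for m in maintainers))

example = ('{"$schema": "https://glama.ai/mcp/schemas/connector.json", '
           '"maintainers": [{"email": "your-email@example.com"}]}')
```

Running `looks_like_glama_json(example)` on the snippet above should pass, while malformed JSON or a non-list `maintainers` field fails.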
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.