Syenite
Server Details
DeFi interface for AI agents: swap, bridge, yield, lending. Live at syenite.ai.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: syenite-ai/syenite
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
22 tools

alerts.check
Check for active alerts on watched positions. Returns any health factor warnings or critical alerts since last acknowledgment. Call this periodically to stay informed about position health.
| Name | Required | Description | Default |
|---|---|---|---|
| watchId | No | Filter alerts for a specific watch ID | |
| acknowledge | No | Mark returned alerts as acknowledged | |
Output Schema
| Name | Required | Description |
|---|---|---|
| alerts | Yes | |
| critical | Yes | |
| warnings | Yes | |
| timestamp | Yes | |
| alertCount | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description carries burden well: discloses 'since last acknowledgment' temporal filtering (stateful behavior), mentions 'health factor warnings or critical alerts' (severity types), and implies acknowledgment side-effect through parameter linkage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose front-loaded, behavioral constraints in middle, usage guidance at end. Every clause essential.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for complexity: mentions domain-specific 'health factor' and 'watched positions' (prerequisite concept). Relies appropriately on output schema for return format details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds value by explaining the relational semantics between 'acknowledge' parameter and 'since last acknowledgment' filtering behavior, clarifying how the boolean affects future calls.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Check for active alerts on watched positions' provides exact verb, resource, and scope. Implicitly distinguishes from sibling 'alerts.list' by emphasizing 'active' and 'watched positions' (stateful filtering).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides 'Call this periodically to stay informed about position health' which implies polling cadence. However, lacks explicit comparison to 'alerts.list' (which presumably returns all alerts vs. active-only) and no 'when-not-to-use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
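Given the schemas above, a periodic polling call can be sketched as follows. This is a minimal sketch assuming an MCP client speaking JSON-RPC over the server's Streamable HTTP transport; the `id` and argument values are illustrative, not part of the tool's documentation.

```python
import json

# Hypothetical JSON-RPC "tools/call" payload for alerts.check. The envelope
# follows the MCP wire format; the id and argument values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "alerts.check",
        "arguments": {
            # Both parameters are optional. acknowledge=True marks the
            # returned alerts as seen, so the next call's "since last
            # acknowledgment" window starts now.
            "acknowledge": True,
        },
    },
}
payload = json.dumps(request)
```

Note that passing `acknowledge` is stateful: it changes what a later call returns, which is exactly the behavior the review above flags as implied rather than annotated.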
alerts.list
List all active position watches and their status. Shows watched addresses, thresholds, and last check times.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| watches | Yes | |
| timestamp | Yes | |
| watchCount | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, description must carry full burden. It discloses what data is returned ('watched addresses, thresholds, and last check times'), providing useful behavioral preview. However, lacks operational details like caching, rate limits, or explicit read-only declaration.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two crisp sentences with zero waste: first states purpose, second details output content. Appropriately sized for a parameterless list operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple list tool with output schema existing. Description focuses on relevant output fields without needing to fully document return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool accepts zero parameters, warranting baseline score of 4 per rubric. Schema coverage is 100% (empty object), so no parameter semantics needed from description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'List' and resource 'position watches' (clarifying that alerts are position watches). Mentions key output fields. However, does not explicitly differentiate from sibling 'alerts.check' (list vs check distinction is implicit in verbs only).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus 'alerts.check' or prerequisites. Only implied usage is viewing all watches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
alerts.remove
Remove a position watch by its ID. Stops monitoring that address.
| Name | Required | Description | Default |
|---|---|---|---|
| watchId | Yes | ID of the watch to remove | |
Output Schema
| Name | Required | Description |
|---|---|---|
| message | Yes | |
| success | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the effect ('stops monitoring'), but fails to clarify whether removal is permanent/irreversible, if historical alert data is preserved, or if there are cascading effects on dependent systems. Adequate but lacks safety context for a destructive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero filler. The first sentence front-loads the core action and resource, while the second clarifies the behavioral consequence. Every word earns its place; no redundancy with the schema or tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description appropriately omits return value details. For a single-parameter deletion tool, the description is functionally complete, though a brief note on the permanent nature of the deletion would strengthen it given the lack of destructive annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value beyond the schema—it confirms the ID refers to a 'position watch' (domain context), but does not elaborate on ID format, validation rules, or error states beyond what the schema property description already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Remove') with a clear resource ('position watch'), and explicitly scopes the operation ('by its ID'). It effectively distinguishes from siblings like alerts.watch (creation), alerts.list (enumeration), and alerts.check (status query) through the destructive verb choice.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence ('Stops monitoring that address') implies the tool is for ceasing active monitoring, but there is no explicit guidance on when to use this versus keeping a watch inactive, or prerequisites like requiring an existing watch ID from alerts.list. Usage is implied but not prescribed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
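Because `watchId` is the tool's only parameter and is required, a caller needs an ID from a prior `alerts.watch` or `alerts.list` call. A minimal sketch, with a hypothetical helper and example ID:

```python
# Hypothetical helper building alerts.remove arguments. The watchId value is
# illustrative; a real ID comes from alerts.watch or alerts.list.
def build_remove_call(watch_id: str) -> dict:
    # watchId is the tool's only parameter, and it is required.
    if not watch_id:
        raise ValueError("watchId is required")
    return {"name": "alerts.remove", "arguments": {"watchId": watch_id}}

call = build_remove_call("watch-1a2b")  # hypothetical watch ID
```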
alerts.watch
Register a lending position for continuous monitoring. Sets a health factor threshold — when the position drops below it, alerts are generated. Alerts can be polled via alerts.check. Use this for automated risk management.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain filter | all |
| address | Yes | EVM address to monitor | |
| protocol | No | Protocol filter | all |
| healthFactorThreshold | No | Alert when health factor drops below this value | 1.5 |
Output Schema
| Name | Required | Description |
|---|---|---|
| usage | Yes | |
| watch | Yes | |
| message | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It explains the threshold trigger mechanism ('when the position drops below it, alerts are generated'), but omits mutation semantics (idempotency, whether re-registering overwrites), output format details despite output schema existing, and any rate limit or persistence information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four efficient sentences with zero waste: immediate action (register), mechanism explanation (threshold/trigger), workflow integration (alerts.check), and use case (risk management). Well front-loaded with the core purpose in the first sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete given rich schema (100% coverage, 4 well-documented params) and existence of output schema. Covers registration purpose and alerting workflow. Minor gap regarding idempotency behavior when registering an already-watched address, which would be relevant for a stateful 'register' operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% ('Chain filter', 'EVM address to monitor', etc.), establishing baseline 3. The description adds minor semantic context for healthFactorThreshold by framing it as setting a risk threshold, but does not add significant meaning beyond what the schema already provides for chain, address, or protocol parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (register) and resource (lending position for continuous monitoring), and explicitly distinguishes from sibling tool alerts.check by stating alerts 'can be polled via alerts.check', clarifying this tool sets up monitoring rather than retrieves alerts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear workflow guidance by naming alerts.check as the polling mechanism and specifies the use case ('automated risk management'). Lacks explicit guidance on when not to use this versus lending.position.monitor or whether duplicate registrations are allowed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
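The parameter table above can be restated as a small argument builder. This is a sketch, not the server's API: the helper name is hypothetical, the address is a dummy value, and the defaults (`all`, `all`, `1.5`) come straight from the table.

```python
# Hypothetical builder for alerts.watch arguments. Only "address" is
# required; chain and protocol default to "all" and healthFactorThreshold
# defaults to 1.5, per the documented parameter table.
def build_watch_args(address: str,
                     health_factor_threshold: float = 1.5,
                     chain: str = "all",
                     protocol: str = "all") -> dict:
    return {
        "address": address,
        "chain": chain,
        "protocol": protocol,
        "healthFactorThreshold": health_factor_threshold,
    }

# Dummy EVM address for illustration only.
args = build_watch_args("0x1111111111111111111111111111111111111111")
```

A registration made this way is then polled via `alerts.check`, matching the workflow the description names.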
find.strategy
Composable strategy finder — scans yield, carry trades, lending leverage, prediction markets, and gas costs to surface the best opportunities for a given asset and risk profile. The intelligence layer that connects all Syenite data sources. Tell it what you have, and it tells you what to do. Returns ranked strategies with expected return, risk level, execution steps, and which Syenite tools to call next.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | Yes | Asset you have or want to deploy: "ETH", "WETH", "wBTC", "USDC", "USDT", etc. | |
| chain | No | Chain preference | all |
| amount | No | Amount in USD to deploy (for return calculations) | |
| riskTolerance | No | Maximum risk level: low (yield only), medium (carry + arb), high (leverage + prediction) | high |
| includePrediction | No | Include prediction market signals in strategy search | |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| query | Yes | |
| summary | Yes | |
| timestamp | Yes | |
| strategies | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully explains the return structure ('ranked strategies with expected return, risk level, execution steps') and orchestration behavior ('which Syenite tools to call next'). Missing only operational details like caching or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences: the first defines capabilities with specific scan targets, the second positions the tool architecturally as the 'intelligence layer', the third has slight marketing flair ('Tell it what you have...') but remains functional, and the fourth details return values. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 5-parameter orchestration tool connecting multiple data sources, the description adequately covers scope (what it scans), value proposition ('connects all Syenite data sources'), and output structure. Since output schema exists, the brief return value summary is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed descriptions for all 5 parameters (asset, chain, amount, riskTolerance, includePrediction). The description references 'asset' and 'risk profile' in context but does not add validation rules, formats, or examples beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verbs ('scans', 'surface', 'connects') and lists exact domains covered (yield, carry trades, lending leverage, prediction markets, gas costs). The phrase 'intelligence layer that connects all Syenite data sources' clearly positions it as an orchestrator versus specialized siblings like yield.opportunities or strategy.carry.screen.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage as an entry point ('Tell it what you have, and it tells you what to do') and mentions it returns 'which Syenite tools to call next', suggesting orchestration behavior. However, lacks explicit 'when not to use' guidance or direct comparisons to specialized alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
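The `riskTolerance` tiers gate which strategy classes are scanned. The mapping below restates the parameter table under one assumption labeled in the comments: that tiers are cumulative, since the table describes a *maximum* risk level. The argument values are illustrative.

```python
# Restatement of the documented riskTolerance tiers, assuming tiers are
# cumulative (each level includes everything below it) because the parameter
# is described as a maximum risk level.
RISK_TIERS = {
    "low": ["yield"],                                           # yield only
    "medium": ["yield", "carry", "arb"],                        # carry + arb
    "high": ["yield", "carry", "arb", "leverage", "prediction"],  # leverage + prediction
}

arguments = {
    "asset": "USDC",           # required
    "amount": 10_000,          # USD, used for return calculations
    "riskTolerance": "medium", # default is "high"
}

allowed = RISK_TIERS[arguments["riskTolerance"]]
```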
gas.estimate
Estimate current gas costs across supported chains (Ethereum, Arbitrum, Base, BNB Chain). Returns gas prices, estimated costs for common operations (transfer, swap, bridge, contract call), and the native token needed. Use this before executing transactions to ensure the wallet has enough native gas, or to pick the cheapest chain for an operation.
| Name | Required | Description | Default |
|---|---|---|---|
| chains | No | Chains to check (defaults to all) | |
| operations | No | Operation types to estimate: transfer, erc20_transfer, swap, bridge, erc20_approve, lending_supply, lending_borrow, contract_register | |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| estimates | Yes | |
| timestamp | Yes | |
| chainsQueried | Yes | |
| cheapestChain | Yes | Cheapest chain for each operation type |
| availableOperations | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return values (gas prices, estimated costs, native token needed) and supported chains. However, it omits whether data is real-time vs cached, rate limits, or behavior when chains are offline. Still provides solid behavioral context for a read-only tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each earning its place: sentence 1 defines scope and chains, sentence 2 specifies return values, sentence 3 provides usage timing. No redundancy or filler. Information is front-loaded with the core action in the first clause.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description appropriately summarizes return values without over-specifying. All supported chains are explicitly listed, and the optional nature of parameters is implied by 'defaults to all' in the schema. Adequate for a 2-parameter read-only estimation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with clear enum values and defaults documented. The description mentions operations (transfer, swap, bridge) but doesn't add parameter syntax details beyond the schema. With full schema coverage, baseline 3 is appropriate as the description doesn't need to compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Estimate' with clear resource 'gas costs' and explicitly lists supported chains (Ethereum, Arbitrum, Base, BNB Chain). It distinguishes from siblings like swap.quote or wallet.balances by focusing purely on gas estimation rather than execution or balances.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'before executing transactions to ensure the wallet has enough native gas, or to pick the cheapest chain for an operation.' This provides clear temporal guidance (pre-transaction) and decision-making criteria (cost comparison) that distinguishes it from execution tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
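The output schema's `cheapestChain` field maps each operation type to the cheapest network, which supports the "pick the cheapest chain" use the description names. A sketch with a mocked response — field names follow the output schema, values are invented:

```python
# Mocked gas.estimate response: keys mirror the documented output schema,
# but the chain names and rankings here are invented for illustration.
mock_response = {
    "cheapestChain": {"swap": "base", "bridge": "arbitrum"},
    "chainsQueried": ["ethereum", "arbitrum", "base", "bnb"],
}

def pick_chain(response: dict, operation: str) -> str:
    # Route the operation to whichever chain the estimate ranked cheapest.
    return response["cheapestChain"][operation]

target = pick_chain(mock_response, "swap")
```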
lending.market.overview
Get an aggregate overview of DeFi lending markets across Aave v3, Compound V3, Morpho Blue, and Spark on Ethereum, Arbitrum, and Base. Returns per-protocol totals: TVL, total borrowed, utilization ranges, rate ranges, and available liquidity. Supports all collateral types (BTC wrappers, ETH, LSTs). Use this for a high-level view of lending market conditions.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain to query. Defaults to all supported chains. | all |
| collateral | No | Filter by collateral asset, or "all" | all |
Output Schema
| Name | Required | Description |
|---|---|---|
| query | Yes | |
| protocols | Yes | |
| timestamp | Yes | |
| crossProtocol | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively specifies the data sources (Aave, Compound, Morpho, Spark), supported networks (Ethereum, Arbitrum, Base), and return value structure (TVL, borrowed, utilization ranges, etc.). It lacks operational details like caching or rate limits, but covers the essential behavioral scope for a read-only aggregation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: scope declaration, return value specification, and usage guidance. Every sentence earns its place with zero redundancy. Information is front-loaded with the action verb and immediately follows with resource specifics.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema and only 2 simple parameters, the description is appropriately comprehensive. It covers the multi-protocol/multi-chain scope, collateral support, and intended use case. For a moderately complex DeFi aggregation tool, this is sufficient without overwhelming verbosity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds valuable semantic context beyond the schema by specifying supported collateral types ('BTC wrappers, ETH, LSTs') and explicitly listing the protocols and chains, which helps the agent understand the enum values in the 'chain' parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Get an aggregate overview of DeFi lending markets across Aave v3, Compound V3, Morpho Blue, and Spark' — a specific verb+resource combination that clearly defines scope. It distinguishes from siblings like 'lending.rates.query' and 'lending.risk.assess' by emphasizing the aggregate/high-level nature of the data returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states 'Use this for a high-level view of lending market conditions,' which provides clear context on when to use the tool. However, it lacks explicit differentiation from sibling tools (e.g., when to choose 'lending.rates.query' vs this overview) or exclusions for when this tool is inappropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
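Both parameters are optional filters that default to `"all"`, so an agent can call the tool with no arguments for the full overview or narrow it. A sketch of how documented defaults compose with user-supplied filters; the filter values are examples, not an exhaustive enum:

```python
# Illustrative lending.market.overview filters. "base" and "wstETH" are
# example values; the tool documents BTC wrappers, ETH, and LSTs as
# supported collateral.
defaults = {"chain": "all", "collateral": "all"}
overview_args = {"chain": "base", "collateral": "wstETH"}

# Merging user filters over the defaults mirrors how optional parameters
# with documented defaults behave.
effective = {**defaults, **overview_args}
```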
lending.position.monitor
Check the health of any DeFi lending position on Aave v3, Morpho Blue, or Spark across Ethereum, Arbitrum, and Base. Returns current LTV, health factor, liquidation price, distance to liquidation (% price drop needed), borrow rate, and estimated annual cost. Works with any wallet address. Scans all collateral types (BTC wrappers, ETH, LSTs) automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain to query. Defaults to all supported chains. | all |
| address | Yes | EVM address to check (works on all supported chains) | |
| protocol | No | Protocol filter | all |
Output Schema
| Name | Required | Description |
|---|---|---|
| address | Yes | |
| warning | No | Present when positions are approaching liquidation |
| positions | Yes | |
| timestamp | Yes | |
| positionCount | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses automatic cross-chain scanning, automatic collateral detection (BTC wrappers, ETH, LSTs), and comprehensive return metrics. Missing minor edge case details (behavior if no positions exist) but covers main behavioral traits well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three dense sentences with zero waste. First sentence establishes scope, second details return values, third explains input flexibility and auto-scanning. Perfect front-loading of critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a monitoring tool. Despite output schema existing, description helpfully enumerates specific return metrics (LTV, liquidation price, distance %) and auto-scanning behavior. Complements 100% schema coverage effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description adds valuable context that address works across all chains (cross-chain compatibility) and implies chain/protocol parameters drive multi-protocol scanning behavior described in first sentence.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with 'Check the health' verb, clear resource (DeFi lending position), and explicit scope listing (Aave v3, Morpho Blue, Spark on Ethereum/Arbitrum/Base). Distinguishes from sibling market.overview by focusing on position-specific health metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clear context for usage (monitoring specific wallet positions vs market data) through phrases like 'Works with any wallet address' and detailed health metrics. Lacks explicit contrast with lending.risk.assess sibling, but scope is inferable from position-specific outputs listed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
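Per the output schema, `warning` is the one optional field — present only when positions approach liquidation — so callers should read it defensively. Mocked response below: field names come from the schema, the address, protocol, and values are invented.

```python
# Mocked lending.position.monitor response; keys mirror the output schema,
# all values are invented for illustration.
mock_response = {
    "address": "0x2222222222222222222222222222222222222222",
    "positionCount": 1,
    "positions": [{"protocol": "aave-v3", "healthFactor": 1.08}],
    "timestamp": "2025-01-01T00:00:00Z",
    "warning": "position approaching liquidation",
}

def needs_attention(response: dict) -> bool:
    # "warning" is optional in the schema, so use .get() rather than
    # indexing; any response carrying it is treated as actionable.
    return response.get("warning") is not None

flag = needs_attention(mock_response)
```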
lending.rates.query
Query real-time DeFi lending rates across Aave v3, Compound V3, Fluid, Morpho Blue, and Spark on Ethereum, Arbitrum, and Base. Returns normalized borrow APY, supply APY, available liquidity, utilization, and LTV limits for each market. Supports all major collateral types: BTC wrappers (wBTC, tBTC, cbBTC), ETH and LSTs (WETH, wstETH, rETH, cbETH, weETH). Filter by specific asset, category ("BTC" or "ETH"), chain, or use "all".
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain to query. Defaults to all supported chains. | all |
| collateral | No | Collateral asset or category: "wBTC", "tBTC", "cbBTC", "WETH", "wstETH", "rETH", "cbETH", "weETH", "BTC", "ETH", or "all" | all |
| borrowAsset | No | Asset to borrow: "USDC", "USDT", "DAI", "GHO" | USDC |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| query | Yes | |
| markets | Yes | All matching lending markets, sorted by borrow APY |
| timestamp | Yes | |
| bestBorrowRate | Yes | Best rate found, or null if no markets match |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses what the tool returns (normalized borrow APY, supply APY, liquidity, utilization, LTV limits) and the real-time nature of the data. It lacks operational details like caching or rate limits, but covers the essential behavioral contract for a read-only query tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences with zero waste: purpose declaration, return value specification, supported asset enumeration, and filtering capabilities. Information is front-loaded with the core action, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 optional parameters, 100% schema coverage, and existence of an output schema), the description is appropriately comprehensive. It enumerates specific protocols, chains, collateral types, and return metrics, providing sufficient domain context for correct invocation without over-documenting what the schemas already cover.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although schema coverage is 100%, the description adds valuable semantic context by grouping collateral options into categories (BTC wrappers, ETH and LSTs) and reinforcing the filter capabilities mentioned in the schema. This categorical framing helps the agent understand asset relationships beyond the raw enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the specific action (Query), resource (real-time DeFi lending rates), and scope (Aave v3, Compound V3, Fluid, Morpho Blue, and Spark on Ethereum, Arbitrum, Base). The detailed enumeration of protocols and chains effectively distinguishes this from siblings like lending.market.overview or yield.opportunities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description provides clear context about supported protocols and filters, it offers no explicit guidance on when to use this tool versus siblings like lending.market.overview or lending.risk.assess. Usage must be inferred from the specificity of 'Query real-time DeFi lending rates' and the listed protocols.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lending.risk.assess
Assess the risk of a proposed DeFi lending position before opening it. Returns risk score (1-10), recommended protocol, liquidation price and penalty, position sizing analysis, collateral risk profile, protocol risk notes (oracle, liquidation mechanics, governance), estimated annual cost, and actionable summary. Supports any collateral asset (wBTC, tBTC, cbBTC, WETH, wstETH, rETH, cbETH, weETH) and borrow asset (USDC, USDT, DAI, GHO).
| Name | Required | Description | Default |
|---|---|---|---|
| protocol | No | Protocol preference | best |
| targetLTV | Yes | Desired LTV percentage (1-99) | |
| collateral | Yes | Collateral asset: "wBTC", "tBTC", "cbBTC", "WETH", "wstETH", "rETH", "cbETH", "weETH" | |
| borrowAsset | No | Asset to borrow | USDC |
| collateralAmount | Yes | Amount of collateral |
Output Schema
| Name | Required | Description |
|---|---|---|
| query | Yes | |
| timestamp | Yes | |
| assessment | Yes | |
| alternativeMarkets | Yes |
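To build intuition for the liquidation price the assessment returns: the conventional formula divides the borrowed value by the collateral amount times the liquidation threshold. This is a generic sketch of that relationship, not necessarily the tool's exact model, and the 0.78 threshold below is an assumed value:

```python
def liquidation_price(collateral_amount, borrow_value_usd, liq_threshold):
    """Collateral price at which the position becomes liquidatable.

    Liquidation occurs when:
        collateral_amount * price * liq_threshold == borrow_value_usd
    """
    return borrow_value_usd / (collateral_amount * liq_threshold)

# 1 wBTC collateral, $40,000 borrowed, assumed 0.78 liquidation threshold
price = liquidation_price(1.0, 40_000, 0.78)
print(round(price, 2))  # 51282.05
```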
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Lists comprehensive output details (risk score, liquidation mechanics, governance risks) but omits operational behavior: it does not state whether the tool is read-only or simulated, how fresh the data is, or what the failure modes are. An output schema exists, so return values need not be fully described, but the operational traits are still missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose first, then outputs, then constraints. Three sentences with logical flow. The second sentence is long but information-dense, listing return values. No redundant filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a complex risk tool: covers purpose, detailed outputs, and asset constraints. An output schema is present (noted in context). The only gap is the missing title field (null), which slightly impacts completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds value by explicitly enumerating supported collateral assets (wBTC, tBTC, etc.) and borrow assets (USDC, USDT, DAI, GHO), reinforcing valid parameter values and domain constraints beyond schema strings.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Assess' with specific resource 'risk of a proposed DeFi lending position'. Explicitly distinguished from siblings like lending.position.monitor (existing positions) and lending.rates.query (rates only) by focusing on pre-opening risk analysis of proposed positions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides temporal context 'before opening it' indicating pre-position usage. However, lacks explicit guidance on when to use alternatives like lending.position.monitor (for existing positions) or find.strategy (for strategy discovery) vs. this risk assessment tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prediction.book
Get the order book for a specific Polymarket outcome token. Returns top bids/asks, spread, mid-price, and depth. Use for assessing execution quality and market making opportunities. Requires a Polymarket token ID (from the markets returned by prediction.trending or prediction.search).
| Name | Required | Description | Default |
|---|---|---|---|
| tokenId | Yes | Polymarket outcome token ID (from prediction.trending or prediction.search results) |
Output Schema
| Name | Required | Description |
|---|---|---|
| asks | No | |
| bids | No | |
| note | No | |
| error | No | |
| spread | No | |
| tokenId | Yes | |
| midPrice | No | |
| spreadBps | No | Spread in basis points |
| timestamp | Yes | |
| askDepthUSD | No | |
| bidDepthUSD | No |
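The `midPrice`, `spread`, and `spreadBps` fields can be reproduced from the top of the book. A sketch with made-up bid/ask levels (not live Polymarket data):

```python
def book_metrics(best_bid, best_ask):
    # Mid-price, absolute spread, and spread in basis points (spreadBps)
    mid = (best_bid + best_ask) / 2
    spread = best_ask - best_bid
    spread_bps = spread / mid * 10_000
    return mid, spread, spread_bps

mid, spread, bps = book_metrics(best_bid=0.48, best_ask=0.52)
print(mid, round(spread, 2), round(bps, 1))  # 0.5 0.04 800.0
```

A wide spread in basis points is exactly the signal the sibling prediction.signals tool flags for market-making opportunities.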
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes the return structure ('Returns top bids/asks, spread, mid-price, and depth'), implying this is a data retrieval operation. It could be improved by mentioning data freshness or caching behavior, but adequately covers the core behavioral trait of what data is returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: defines the operation, lists return values, specifies use cases, and states prerequisites. Information is front-loaded with the core action in the first sentence, and every sentence earns its place by adding distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has a single required parameter, 100% schema coverage, and an output schema exists (noting context signals), the description is complete. It successfully establishes the Polymarket domain context and explains the return data structure without needing to exhaustively document the output format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage and already documents that the tokenId comes from prediction.trending or prediction.search. The description repeats this provenance information without adding additional semantic details about the parameter format or validation rules beyond what the schema provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Get' and clearly identifies the resource as 'the order book for a specific Polymarket outcome token.' It effectively distinguishes itself from sibling tools prediction.search and prediction.trending by specifying it operates on outcome tokens rather than markets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the tool: 'Use for assessing execution quality and market making opportunities.' It also clearly identifies the prerequisite workflow by stating it requires a token ID from prediction.trending or prediction.search, establishing the correct sequence of tool invocations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prediction.search
Search prediction markets on Polymarket by topic. Returns matching markets with probabilities, volume, liquidity, and order book metrics. Good for finding specific events (elections, crypto prices, sports, geopolitics).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results to return | |
| query | Yes | Search query (e.g. 'Bitcoin price', 'US election', 'Fed rate') |
Output Schema
| Name | Required | Description |
|---|---|---|
| query | Yes | |
| events | Yes | |
| source | Yes | |
| timestamp | Yes | |
| resultCount | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses what data is returned (probabilities, volume, liquidity, order book metrics). This helps agents understand the tool's utility. It omits minor operational details like rate limits or caching but covers the critical behavioral aspect of the return payload.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste. First states purpose, second discloses return data, third provides usage examples. Front-loaded with the most critical information (Polymarket search) and avoids redundant elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and 100% input schema coverage, the description appropriately focuses on high-value context: specifying the platform (Polymarket), return data fields, and usage domains. Could explicitly mention this is read-only to contrast with prediction.book, but otherwise complete for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds the framing 'by topic' which aligns with the query parameter's purpose, but doesn't add syntax details, validation rules, or semantics beyond what the well-documented schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb (search), resource (prediction markets on Polymarket), and scope (by topic). It implicitly distinguishes from siblings like prediction.book (trading), prediction.signals (alerts), and prediction.trending (browsing) through its search-focused functional description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete usage examples (elections, crypto prices, sports, geopolitics) that clarify when to use the tool. While it doesn't explicitly name alternatives or exclusions, the examples effectively signal this is for topic-specific lookups versus browsing trending markets.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prediction.signals
Detect actionable signals across prediction markets — volume spikes, wide spreads, extreme probabilities, and high-conviction opportunities. Scans Polymarket for markets where agents can profit: market making on wide spreads, fading extremes, or riding momentum. Returns ranked signals with signal type, strength, and suggested action. Use this for autonomous prediction market strategy discovery.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum signals to return | |
| types | No | Filter to specific signal types (defaults to all) | |
| minStrength | No | Minimum signal strength (0-100) to include |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| source | Yes | |
| signals | Yes | Ranked signals — each has type, strength, market, question, action, and signal-specific data |
| timestamp | Yes | |
| typeCounts | Yes | |
| signalCount | Yes | |
| signalTypes | Yes | Legend: signal type → description |
| marketsScanned | Yes |
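The `types`, `minStrength`, and `limit` parameters amount to a filter-then-rank pass over the scanned markets. A sketch of that behavior with illustrative signal dicts (the field names mirror the output schema, but the values are invented):

```python
def filter_signals(signals, types=None, min_strength=0, limit=None):
    # Defaults mirror the schema: all types, no strength floor.
    kept = [
        s for s in signals
        if (types is None or s["type"] in types) and s["strength"] >= min_strength
    ]
    kept.sort(key=lambda s: s["strength"], reverse=True)  # ranked signals
    return kept[:limit] if limit is not None else kept

signals = [
    {"type": "wide_spread", "strength": 82, "question": "..."},
    {"type": "volume_spike", "strength": 64, "question": "..."},
    {"type": "extreme_probability", "strength": 45, "question": "..."},
]
top = filter_signals(signals, types=["wide_spread", "volume_spike"], min_strength=60)
print([s["strength"] for s in top])  # [82, 64]
```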
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It effectively describes the data source (Polymarket), return format (ranked signals with type, strength, suggested action), and practical application (profit opportunities). Could improve by mentioning caching behavior, rate limits, or real-time vs batched data, but covers core behavioral traits well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste. Front-loaded with core purpose ('Detect actionable signals'), followed by data source, return value description, and usage guidance. Every sentence earns its place with dense, actionable information. No redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for complexity (3 optional params, 100% schema coverage, output schema exists). Description summarizes return values ('ranked signals with signal type, strength, and suggested action') without needing to duplicate the full output schema. Covers what the tool does, what it returns, and when to use it, satisfying all completeness requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (limit, types, minStrength), establishing baseline 3. Description adds value by mapping the abstract 'types' parameter to concrete trading concepts mentioned in the text (wide spreads, extreme probabilities, high-conviction opportunities), helping agents understand the semantic meaning of filter options beyond the enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verbs (detect, scans) and clearly identifies the resource (Polymarket prediction markets). It distinguishes from siblings like prediction.search or prediction.trending by focusing on 'actionable signals' for profit opportunities rather than general market data, and specifies unique signal types (volume spikes, wide spreads, extreme probabilities).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use this for autonomous prediction market strategy discovery' and describes specific trading approaches (market making on wide spreads, fading extremes, riding momentum). However, it does not explicitly mention when NOT to use it or name specific sibling alternatives like find.strategy or prediction.search for different use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prediction.trending
Get trending prediction markets on Polymarket, ranked by volume. Returns market titles, current probabilities (outcome prices), volume, liquidity, and spread. Use this for discovering active markets and identifying trading opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of trending markets to return (max 25) |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| events | Yes | |
| source | Yes | |
| timestamp | Yes | |
| eventCount | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully specifies the ranking logic (by volume) and return payload contents (titles, probabilities, volume, liquidity, spread), but omits safety/permission context (e.g., whether authentication is required) or operational constraints like rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences with zero redundancy: purpose/scope, return values, and usage guidance. Each sentence earns its place without extraneous verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for the tool's low complexity (1 optional parameter, 100% schema coverage, output schema present). It summarizes the return values effectively without needing to duplicate full output schema details, though it could briefly mention the limit parameter's existence.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for the single 'limit' parameter. The description adds no explicit parameter details, which aligns with the baseline score of 3 for high-coverage schemas where the structured documentation suffices.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Get') and resource ('trending prediction markets on Polymarket') with specific ranking logic ('ranked by volume'). However, it lacks explicit differentiation from siblings like prediction.search or prediction.signals, which would be needed for a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context with 'Use this for discovering active markets and identifying trading opportunities,' indicating appropriate scenarios. However, it lacks explicit exclusions or named alternatives (e.g., when to use prediction.search instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
strategy.carry.screen
Screen for positive carry strategies across all DeFi lending markets. Calculates net carry = collateral supply APY - borrow APY for every market. Returns strategies ranked by carry, including net annual return on a given position size. Positive carry means you earn more on deposited collateral than you pay to borrow — a self-funding leveraged position. Use this to discover the best borrow-to-yield strategies for autonomous agents.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain filter | all |
| minCarry | No | Minimum net carry % to include (e.g. 0 for positive-only) | |
| collateral | No | Collateral filter: specific asset, "BTC", "ETH", or "all" | all |
| borrowAsset | No | Borrow asset | USDC |
| positionSizeUSD | No | Position size in USD for annual return calculation |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| query | Yes | |
| summary | Yes | |
| timestamp | Yes | |
| strategies | Yes |
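The carry formula stated in the description is simple enough to verify by hand. A sketch of net carry and the annual return on a given position size, with made-up APY figures; the tool's exact weighting of collateral versus debt may differ:

```python
def net_carry(supply_apy, borrow_apy):
    # Net carry = collateral supply APY - borrow APY, per the tool description.
    return supply_apy - borrow_apy

def annual_return_usd(supply_apy, borrow_apy, position_size_usd):
    # Net annual return on the position size, ignoring compounding.
    return position_size_usd * net_carry(supply_apy, borrow_apy) / 100

carry = net_carry(supply_apy=3.2, borrow_apy=2.5)
print(round(carry, 2), round(annual_return_usd(3.2, 2.5, 100_000), 2))  # 0.7 700.0
```

A positive result means the deposited collateral out-earns the borrow cost, which is what the tool calls a self-funding leveraged position; `minCarry=0` filters to exactly these.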
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It excellently discloses the calculation methodology (net carry formula) and financial mechanics ('self-funding leveraged position'), but omits system-level behaviors like auth requirements, rate limits, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences with zero waste: front-loaded with the action, followed by calculation logic, return format, business concept definition, and usage guidance. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists (has_output_schema: true) and input schema is fully documented, the description appropriately focuses on domain logic (carry trade explanation). Adequate for complexity, though could mention data staleness or market coverage limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('Chain filter', 'Minimum net carry %', etc.), establishing baseline 3. Description references 'position size' and implies filtering but does not add syntax details or semantic nuance beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states exact action ('Screen for positive carry strategies'), defines the resource ('DeFi lending markets'), and distinguishes from siblings via the unique 'carry' calculation (collateral supply APY - borrow APY) and 'autonomous agents' use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear when-to-use context ('Use this to discover the best borrow-to-yield strategies for autonomous agents'), but lacks explicit when-not-to-use guidance or direct comparison to siblings like find.strategy or yield.opportunities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
swap.multi
Batch multiple swap or bridge quotes in a single call. Fetches all quotes in parallel and returns them together. Useful for splitting funds across chains, multi-leg rebalancing, or comparing routes side-by-side. Each request in the batch uses the same parameters as swap.quote. Returns individual results (with errors per-item if any fail) and a summary of total costs.
| Name | Required | Description | Default |
|---|---|---|---|
| requests | Yes | Array of swap/bridge requests to quote in parallel |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| quotes | Yes | Individual quote results — each has index, status, and full quote data or error |
| timestamp | Yes | |
| totalCosts | Yes | |
| failedCount | Yes | |
| requestCount | Yes | |
| successCount | Yes |
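The per-item status plus the summary counters suggest a consumption pattern like the following. The result dicts are illustrative, not the tool's exact payload:

```python
def summarize(quotes):
    # Each batch item carries its own status, so one failed quote does not
    # discard the rest. Mirror the successCount/failedCount summary fields.
    ok = [q for q in quotes if q["status"] == "success"]
    failed = [q for q in quotes if q["status"] == "error"]
    return {
        "requestCount": len(quotes),
        "successCount": len(ok),
        "failedCount": len(failed),
    }

quotes = [
    {"index": 0, "status": "success", "quote": {"toAmount": "3500000000"}},
    {"index": 1, "status": "error", "error": "insufficient liquidity"},
]
print(summarize(quotes))  # {'requestCount': 2, 'successCount': 1, 'failedCount': 1}
```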
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Covers parallel execution, per-item error handling, and return structure (individual results + cost summary). Missing explicit read-only/safety declaration.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences with zero waste: purpose, mechanism, use cases, parameter reference, and return values. Well front-loaded and appropriately sized for complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for batch operation complexity. Acknowledges output schema exists by summarizing return values (individual results, errors, summary) without over-specifying. Could mention the 10-request limit explicitly, though schema covers it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds value by referencing swap.quote parameter structure ('Each request... uses the same parameters as swap.quote'), avoiding duplication while explaining the nested request objects.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb+resource ('Batch multiple swap or bridge quotes') clearly states what it does. Distinguishes from sibling swap.quote by emphasizing 'batch' and 'parallel' execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific use cases ('splitting funds across chains, multi-leg rebalancing, comparing routes'). References sibling swap.quote for single-quote operations, implicitly guiding tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
swap.quote
Get an optimal swap or bridge quote with unsigned transaction calldata. Supports same-chain swaps and cross-chain bridges across 30+ chains via aggregated routing (1inch, 0x, Paraswap, and more). Returns the best route, expected output, fee breakdown, and an unsigned transaction ready to sign. The agent or user handles signing — Syenite never touches private keys. For cross-chain transfers, this handles bridging automatically — no separate bridge step needed.
| Name | Required | Description | Default |
|---|---|---|---|
| order | No | Route preference: CHEAPEST (best price) or FASTEST (lowest execution time) | CHEAPEST |
| toChain | No | Destination chain (defaults to fromChain). Set differently for cross-chain bridges. | |
| toToken | Yes | Token to buy — symbol (e.g. "WETH", "USDT") or contract address | |
| slippage | No | Max slippage as decimal (0.005 = 0.5%) | |
| fromChain | No | Source chain: "ethereum", "arbitrum", "optimism", "base", "polygon", "bsc", "avalanche", or chain ID | ethereum |
| fromToken | Yes | Token to sell — symbol (e.g. "USDC", "ETH") or contract address | |
| toAddress | No | Recipient address (defaults to fromAddress) | |
| fromAmount | Yes | Amount to swap in smallest unit (e.g. 1000000 for 1 USDC with 6 decimals). Use token decimals. | |
| fromAddress | Yes | Sender wallet address (used for routing and approval checks) |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| type | Yes | |
| costs | Yes | |
| quote | Yes | |
| route | Yes | Ordered routing steps |
| summary | Yes | Human-readable summary, e.g. '1.5 WETH → 3500 USDC' |
| tracking | No | Present for cross-chain bridges — use swap.status to track |
| warnings | No | Risk warnings (slippage, gas, cross-chain) |
| execution | Yes | |
| estimatedTime | Yes | |
| approvalRequired | No | Present when token approval is needed before the swap |
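The `fromAmount` parameter expects the token's smallest unit, which is an easy place for an agent to slip. A minimal sketch of assembling the request arguments with the unit conversion made explicit — the wallet address is a placeholder, and only the parameter names come from the schema above:

```python
from decimal import Decimal

def to_smallest_unit(amount: str, decimals: int) -> str:
    """Scale a human-readable token amount to the token's smallest unit."""
    return str(int(Decimal(amount) * (Decimal(10) ** decimals)))

# Hypothetical swap.quote arguments: sell 250 USDC (6 decimals) for WETH
# on Arbitrum, same-chain, with 0.5% max slippage.
quote_args = {
    "fromChain": "arbitrum",
    "fromToken": "USDC",
    "toToken": "WETH",
    "fromAmount": to_smallest_unit("250", 6),  # "250000000"
    "fromAddress": "0x0000000000000000000000000000000000000001",  # placeholder
    "slippage": 0.005,  # 0.005 = 0.5%
    "order": "CHEAPEST",
}
```

Using `Decimal` rather than `float` avoids precision loss on 18-decimal tokens like WETH.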
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and succeeds in disclosing critical behavioral traits: the security model ('Syenite never touches private keys'), the aggregated routing implementation ('1inch, 0x, Paraswap'), and the output state ('unsigned transaction ready to sign'). Minor gap: doesn't clarify if this simulates/reads state or has rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences efficiently cover: purpose, scope/capabilities, return values/security, and cross-chain usage. No redundancy or filler. Information is front-loaded with the core action in sentence one.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete given the complexity (9 params, cross-chain) and presence of output schema. Covers routing logic, security boundaries, and bridging behavior. Does not detail error scenarios or retry behavior, but sufficiently prepares the agent for successful invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with clear semantics for all 9 parameters (e.g., 'smallest unit' for amounts, chain examples). The description does not duplicate parameter documentation but instead focuses on high-level behavior, which is appropriate given the complete schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific action ('Get an optimal swap or bridge quote') and clearly identifies the resource (unsigned transaction calldata). It distinguishes itself from siblings like swap.status (which checks status) and swap.multi (implied by singular 'swap') by emphasizing it produces a single quote ready for signing, not execution or batching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance for cross-chain scenarios ('handles bridging automatically — no separate bridge step needed'), implicitly directing users to this tool for cross-chain needs versus same-chain alternatives. However, it lacks explicit differentiation from swap.multi or guidance on when to prefer gas.estimate or other related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
swap.status
Track execution status of a swap or cross-chain bridge transaction. Returns current status (PENDING, DONE, FAILED), receiving transaction hash, and amount received. Useful for monitoring cross-chain bridges where execution is not instant.
| Name | Required | Description | Default |
|---|---|---|---|
| txHash | Yes | Transaction hash of the submitted swap/bridge | |
| toChain | No | Destination chain (for cross-chain bridges) | |
| fromChain | No | Chain where the transaction was submitted | ethereum |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| bridge | No | |
| status | Yes | |
| message | Yes | Human-readable status explanation |
| toChain | Yes | |
| fromChain | Yes | |
| substatus | No | |
| sendingTxHash | Yes | |
| amountReceived | No | Present when transfer is complete |
| receivingTxHash | No | Present when destination tx is confirmed |
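Because bridge execution is not instant, the typical pattern is to poll swap.status until a terminal state. A sketch assuming a generic `call_tool(name, args)` transport helper supplied by the agent framework — the helper itself is not part of Syenite:

```python
import time

TERMINAL_STATES = {"DONE", "FAILED"}

def poll_swap(call_tool, tx_hash, from_chain="ethereum",
              interval=15.0, max_polls=40):
    """Poll swap.status until the transfer reaches DONE or FAILED."""
    for _ in range(max_polls):
        result = call_tool("swap.status",
                           {"txHash": tx_hash, "fromChain": from_chain})
        if result["status"] in TERMINAL_STATES:
            return result
        time.sleep(interval)  # bridges settle in minutes, not seconds
    raise TimeoutError(f"swap {tx_hash} still PENDING after {max_polls} polls")
```

Raising on timeout rather than returning the last PENDING result forces the caller to decide whether to keep waiting or escalate.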
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully enumerates specific behavioral states (PENDING, DONE, FAILED) and return values. It implies read-only polling behavior but omits rate limits or error handling specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences with zero waste: sentence 1 states purpose, sentence 2 enumerates return values, and sentence 3 provides usage context. Information is appropriately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete given the richness of input schema (100% coverage) and existence of output schema. The description wisely enumerates status enum values. Minor gap on operational details (rate limits, error states) keeps it from a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (txHash, toChain, fromChain all documented), establishing a baseline of 3. The description contextualizes these parameters by mentioning 'cross-chain bridge' domain but does not add syntax details or format specifications beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Track') with a clear resource ('execution status of a swap or cross-chain bridge transaction'). It effectively distinguishes from siblings like 'swap.quote' (pricing) and 'swap.multi' (execution) by focusing on monitoring post-submission state.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on when to use the tool ('Useful for monitoring cross-chain bridges where execution is not instant'), indicating it's for asynchronous operations. Lacks explicit 'when-not-to-use' or named alternatives, preventing a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
syenite.help
Get information about Syenite — the DeFi interface for AI agents. Swap/bridge routing, yield intelligence, lending data, risk assessment, and position monitoring. Call this tool to learn what tools are available and how to use them.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| tools | Yes | |
| access | Yes | |
| service | Yes | |
| website | Yes | |
| description | Yes | |
| yieldSources | Yes | |
| swapAndBridge | Yes | |
| lendingProtocols | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It clarifies the functional behavior (returns information about available tools) but omits operational traits like idempotency, caching behavior, or rate limits that would help an agent understand invocation constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences: the first establishes domain context (Syenite's capabilities), the second states the tool's specific purpose. Every sentence earns its place with zero redundancy or tautology.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a simple help tool with an output schema (handling return values) and no input parameters, the description successfully covers the necessary context: what Syenite is, what it does, and how this tool serves as the discovery mechanism for the sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per evaluation guidelines, zero-parameter tools receive a baseline score of 4, as there are no parameter semantics to document beyond the schema itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Get information') and clearly identifies the resource (Syenite/the DeFi interface). It effectively distinguishes itself from the 20+ operational siblings (swap.*, lending.*, etc.) by positioning itself as the meta/documentation tool for understanding the system.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to call ('to learn what tools are available and how to use them'), providing clear entry-point guidance. However, it does not explicitly state when NOT to use it or name specific alternative tools from the sibling list, stopping short of a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet.balances
Check native and token balances for any EVM address across supported chains (Ethereum, Arbitrum, Base, BNB Chain). Returns native gas token balance, common stablecoin balances, and USD-equivalent totals per chain. Use this to verify an address has sufficient funds before executing swaps, bridges, or on-chain operations.
| Name | Required | Description | Default |
|---|---|---|---|
| chains | No | Chains to query (defaults to all: ethereum, arbitrum, base, bsc) | |
| address | Yes | EVM address to check |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| address | Yes | |
| balances | Yes | |
| timestamp | Yes | |
| chainsQueried | Yes | |
| hasAnyBalance | Yes | True if any chain has a non-zero balance |
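A pre-flight funds check built on this tool might look like the following sketch. The per-chain entry shape (`chain`, `totalUsd`) is assumed for illustration; consult the live output schema for the actual field names:

```python
def has_sufficient_funds(balances_result, chain, min_usd):
    """Return True if the queried address holds at least min_usd of
    USD-equivalent value on the given chain. Field names are illustrative."""
    for entry in balances_result.get("balances", []):
        if entry.get("chain") == chain:
            return entry.get("totalUsd", 0.0) >= min_usd
    return False  # chain not present in the response

# Gate a swap on the balance check before requesting a quote.
result = {"balances": [{"chain": "base", "totalUsd": 412.50}],
          "hasAnyBalance": True}
ok = has_sufficient_funds(result, "base", 100)  # True
```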
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return behavior comprehensively: 'Returns native gas token balance, common stablecoin balances, and USD-equivalent totals per chain.' However, it could explicitly state this is a read-only/safe operation since no readOnlyHint annotation exists to declare safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste: Sentence 1 defines the action and scope, Sentence 2 details return values, Sentence 3 provides usage context. Front-loaded with the core function 'Check native and token balances'. No redundant information or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for complexity: With output schema present, description provides sufficient high-level summary of returns without over-specifying. Covers the essential pre-transaction validation use case given the web3 context. Minor gap: could note whether it accepts ENS names or only raw addresses, but this is not critical given schema clarity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (baseline 3). Description adds value by mapping technical chain identifiers to user-friendly names ('Ethereum, Arbitrum, Base, BNB Chain' matching the enum) and clarifying that 'address' refers to any EVM address (implying both EOA and contract support), enriching the raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with clear resource 'native and token balances' and scope 'any EVM address across supported chains'. Explicitly lists supported chains (Ethereum, Arbitrum, Base, BNB Chain) distinguishing it from non-EVM balance tools and execution-focused siblings like swap.multi or lending.position.monitor.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use this to verify an address has sufficient funds before executing swaps, bridges, or on-chain operations.' This provides clear temporal context (before execution) and distinguishes from sibling execution tools (swap.*, lending.*) by establishing it as a prerequisite check rather than an action.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
yield.assess
Assess the risk of a specific DeFi yield strategy before committing capital. Returns risk breakdown: smart contract risk, oracle dependency, depeg/peg risk, liquidity/exit risk, position sizing vs TVL, protocol governance, and comparable alternatives. Use this after yield.opportunities to evaluate a specific opportunity in depth.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | No | Asset context for finding alternatives | all |
| amount | No | Amount in USD to deposit (optional, enables position sizing analysis) | |
| product | No | Specific product name (optional, helps match the right source) | |
| protocol | Yes | Protocol to assess: "Aave", "Lido", "Morpho", "Ethena", "Yearn", "Maker", "Rocket Pool", "Coinbase" |
Output Schema
| Name | Required | Description |
|---|---|---|
| query | Yes | |
| source | Yes | |
| timestamp | Yes | |
| alternatives | Yes | |
| riskAssessment | Yes | |
| projectedReturn | No | Present when an amount was provided |
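Since `protocol` is the only required parameter and takes a closed set of values, validating it client-side avoids a wasted round trip. A sketch; the protocol list mirrors the schema above, and the deposit scenario is hypothetical:

```python
SUPPORTED_PROTOCOLS = {"Aave", "Lido", "Morpho", "Ethena",
                       "Yearn", "Maker", "Rocket Pool", "Coinbase"}

def build_assess_args(protocol, product=None, asset=None, amount=None):
    """Assemble yield.assess arguments, dropping unset optional fields."""
    if protocol not in SUPPORTED_PROTOCOLS:
        raise ValueError(f"unsupported protocol: {protocol!r}")
    args = {"protocol": protocol}
    if product is not None:
        args["product"] = product
    if asset is not None:
        args["asset"] = asset
    if amount is not None:
        args["amount"] = amount  # USD; unlocks position sizing analysis
    return args

# Evaluate a hypothetical 25,000 USD sUSDe deposit before committing.
assess_args = build_assess_args("Ethena", product="sUSDe",
                                asset="USDe", amount=25_000)
```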
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It substantially delivers by enumerating specific risk dimensions analyzed (smart contract, oracle, depeg, liquidity, governance, alternatives). Minor gap: lacks explicit operational notes (auth requirements, rate limits) though 'returns' implies read-only safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences. Front-loaded with core action ('Assess the risk...'). Second sentence efficiently enumerates return value components. Third establishes workflow. Zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given presence of output schema and 100% input schema coverage, description appropriately focuses on analytical scope (risk categories) and usage context rather than return format. Complete for a 4-parameter assessment tool, though could note error behavior for invalid protocols.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (all 4 parameters well-documented in schema), establishing baseline of 3. Description does not add parameter-specific semantics (e.g., input formats, examples) but none are required given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb ('Assess'), resource ('risk of a specific DeFi yield strategy'), and scope (before committing capital). Clearly distinguishes from sibling 'yield.opportunities' (discovery vs. risk evaluation) and 'lending.risk.assess' (lending vs. yield farming).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit sequencing guidance ('Use this after yield.opportunities') establishes workflow relationship with sibling tool. Implies when NOT to use (not for initial discovery) and establishes depth purpose ('evaluate... in depth').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
yield.opportunities
Find the best DeFi yield opportunities for any asset across blue-chip protocols on Ethereum. Aggregates yields from lending supply (Aave, Morpho, Spark), liquid staking (Lido, Rocket Pool, Coinbase), savings rates (Maker DSR/sDAI), vaults (MetaMorpho, Yearn), and basis capture (Ethena sUSDe). Returns opportunities ranked by APY with risk level, TVL, lockup period, and protocol details. Filter by asset, category, or risk tolerance.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | No | Asset to find yield for: "ETH", "USDC", "DAI", "WETH", "USDe", "stables", or "all" | all |
| category | No | Yield category filter: "lending-supply", "liquid-staking", "vault", "savings-rate", "basis-capture", or "all" | all |
| riskTolerance | No | Maximum risk level to show: "low", "medium", or "high" (the default, "high", shows all) | high |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| query | Yes | |
| summary | Yes | |
| timestamp | Yes | |
| opportunities | Yes |
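Results come back ranked by APY, but an agent may still re-rank or pick client-side. A sketch assuming each opportunity entry carries an `apy` field — that field name is illustrative, not confirmed by the schema above:

```python
# Hypothetical yield.opportunities call: low-risk USDC lending yields only.
opportunity_args = {
    "asset": "USDC",
    "category": "lending-supply",
    "riskTolerance": "low",
}

def best_opportunity(result):
    """Pick the highest-APY entry from the response, or None when empty.
    The 'apy' field name is assumed for illustration."""
    return max(result.get("opportunities", []),
               key=lambda o: o.get("apy", 0.0),
               default=None)
```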
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden and successfully discloses ranking behavior ('ranked by APY'), return value structure ('risk level, TVL, lockup period'), and aggregation scope (specific protocols named). Lacks operational details like data freshness or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded action, followed by source details, return format, and filtering. Information-dense but efficient. Minor redundancy between final filter sentence and parameter schema, though the protocol enumeration provides necessary specificity for a multi-source aggregator.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a 3-parameter aggregation tool with existing output schema. Covers scope, ranking logic, data sources, and return fields. Could note that all parameters are optional (evident from schema defaults but not explicit in text).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with clear enum values documented. Description confirms filtering capability ('Filter by asset, category, or risk tolerance') but adds no semantic depth beyond the schema's clear parameter definitions. Appropriate baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Find' and resource 'DeFi yield opportunities' with clear scope: 'blue-chip protocols on Ethereum'. Lists specific categories (lending supply, liquid staking, etc.) that implicitly distinguish it from specialized siblings like lending.rates.query or yield.assess.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through comprehensive category listing (suggesting use when comparing across lending/staking/vaults), but lacks explicit 'when to use vs alternatives' guidance or contrast with siblings like yield.assess or find.strategy.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.