euler-mcp
Server Details
Euler - 39 tools for lending rates, supply, and borrow data
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: junct-bot/euler-mcp
- GitHub Stars: 0
Tool Definition Quality
Average 3/5 across all 39 tools scored. Lowest: 2.1/5.
The tool set has significant overlap and ambiguity, particularly between event tools (e.g., Borrow vs. RequestBorrow, Deposit vs. RequestDeposit) and between similar events (e.g., various GovSet* events). While descriptions clarify these are events versus actions, the naming alone makes it difficult for an agent to distinguish their purposes without deep domain knowledge. Many tools appear to serve similar logging/filtering functions with only subtle differences in indexed fields.
Naming is mixed but somewhat readable. Most event tools use PascalCase (e.g., AssetStatus, Borrow), while a few use camelCase (e.g., moduleIdToImplementation, moduleIdToProxy) or lowercase (e.g., dispatch, name). There's no consistent verb_noun pattern; events are named as nouns or past-tense verbs, and read/write functions use generic terms. This inconsistency can confuse agents but isn't chaotic.
With 39 tools, the count is excessive for the apparent scope of monitoring contract events and basic interactions. Many tools are highly specialized events (e.g., GovSetAssetConfig, InstallerSetGovernorAdmin) that could be consolidated or parameterized. This bloated set will overwhelm agents and increase misselection risk, indicating poor scoping.
For a contract monitoring server, the surface is fairly complete, covering a wide range of events (e.g., deposits, borrows, liquidations, governance actions) and including key read functions (e.g., name, module lookups) and a write function (dispatch). However, there are minor gaps, such as missing tools for common contract queries like balance checks or detailed state reads, which agents might need for full workflow coverage.
Available Tools (39)
AssetStatus (C)
Event emitted by the contract. Indexed fields (filterable): underlying. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| poolSize | Yes | uint256; pass as decimal string. | |
| timestamp | Yes | uint256; pass as decimal string. | |
| underlying | Yes | address; 0x-prefixed Ethereum address (indexed). | |
| interestRate | Yes | int96; pass as decimal string. | |
| totalBorrows | Yes | uint256; pass as decimal string. | |
| totalBalances | Yes | uint256; pass as decimal string. | |
| reserveBalance | Yes | uint96; pass as decimal string. | |
| interestAccumulator | Yes | uint256; pass as decimal string. | |
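The description tells agents to "Subscribe via log filters" without showing what such a filter looks like. Purely as an illustrative sketch of standard Ethereum `eth_getLogs` semantics — nothing below comes from this server's documentation, and `TOPIC0` is a placeholder because the event's canonical signature is not stated on this page — an agent could constrain the one indexed field, `underlying`, like this:

```python
def address_to_topic(addr: str) -> str:
    """Left-pad a 0x-prefixed address to the 32-byte topic form
    used for indexed address fields in Ethereum log filters."""
    return "0x" + addr[2:].lower().rjust(64, "0")

def asset_status_filter(underlying: str,
                        from_block: str = "earliest",
                        to_block: str = "latest") -> dict:
    """Build an eth_getLogs filter dict for the AssetStatus event,
    constrained to one underlying asset via its indexed topic.
    TOPIC0 is a placeholder: a real filter would use keccak256 of
    the event's canonical signature, which this page omits."""
    TOPIC0 = "0x" + "00" * 32  # placeholder event-signature hash
    return {
        "fromBlock": from_block,
        "toBlock": to_block,
        "topics": [TOPIC0, address_to_topic(underlying)],
    }

f = asset_status_filter("0xDAC17F958D2ee523a2206206994597C13D831ec7")
```

Whether this tool runs such a historical query or opens a live subscription is exactly the ambiguity the review below calls out.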
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It correctly identifies the event nature and indexed fields, but omits critical execution details: whether this queries historical logs or establishes a real-time subscription, return format, or buffering behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at three short sentences with minimal waste. Information is front-loaded with the event identity. However, the first sentence uses passive voice ('Event emitted') rather than an active verb describing the tool's action, slightly reducing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 required parameters and no output schema, the description adequately identifies the event type and subscription method but lacks explanation of the return structure, pagination behavior for historical queries, or the relationship between input parameters (filters) and returned event data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds semantic context that these are event parameters, but redundantly notes that 'underlying' is indexed/filterable when the schema already suffixes this parameter with '(indexed)'. It does not explain the business logic of fields like 'interestAccumulator' or 'reserveBalance'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource as a contract event and mentions subscription, but lacks a specific action verb (e.g., 'Query', 'Subscribe to', 'Fetch') clarifying what the tool actually does. It states what the data represents rather than how the tool operates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a minimal usage hint ('Subscribe via log filters') but offers no guidance on when to use this versus siblings like MarketActivated or ProxyCreated, nor does it specify prerequisites for the subscription.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Borrow (A)
Event emitted by the contract. Indexed fields (filterable): underlying, account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256; pass as decimal string. | |
| account | Yes | address; 0x-prefixed Ethereum address (indexed). | |
| underlying | Yes | address; 0x-prefixed Ethereum address (indexed). | |
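Every uint parameter in these schemas asks for a decimal string rather than a number. As a hedged sketch (the actual tool-call envelope is not documented on this page; the argument names simply follow the table above), here is one safe way to produce that encoding for an 18-decimal token amount without float rounding:

```python
from decimal import Decimal

def to_decimal_string(amount, decimals: int = 18) -> str:
    """Convert a human-readable token amount into the decimal-string
    form the schema asks for ('uint256, pass as decimal string').
    Decimal arithmetic avoids binary-float rounding errors."""
    scaled = Decimal(str(amount)) * (Decimal(10) ** decimals)
    return str(int(scaled))

# Hypothetical argument object for filtering Borrow events;
# addresses below are dummies.
borrow_args = {
    "amount": to_decimal_string("2.5"),  # 2.5 tokens at 18 decimals
    "account": "0x0000000000000000000000000000000000000001",
    "underlying": "0x0000000000000000000000000000000000000002",
}
```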
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of revealing this is a subscription-based listening operation rather than a state-changing transaction. It successfully conveys this is event-related, but omits details about return format (event payload structure), subscription lifecycle, or whether historical logs are also retrievable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description's three sentences are efficient: they identify the event nature, specify the indexed/filterable parameters, and state the subscription mechanism. No redundancy with the structured schema data.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema or annotations, the description adequately covers the input parameters and subscription mechanism but fails to describe what data structure or stream the agent receives upon successful subscription, leaving a gap in contextual completeness for a subscription tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds semantic value by explicitly connecting 'indexed' (in schema) to 'filterable' (usage implication) and highlighting which specific parameters support filtering, though it doesn't add syntax examples or value constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as an 'Event emitted by the contract' with specific indexed fields, distinguishing it from the sibling 'RequestBorrow' (likely a transaction function) and other action-oriented tools like 'Deposit' or 'Withdraw'. The verb 'Subscribe' combined with 'log filters' specifies the mechanism.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states 'Subscribe via log filters' indicating the general usage pattern, but lacks explicit guidance on when to use this event subscription versus calling 'RequestBorrow' or querying historical data versus subscribing to future events.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
DelegateAverageLiquidity (C)
Event emitted by the contract. Indexed fields (filterable): account, delegate. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| account | Yes | address; 0x-prefixed Ethereum address (indexed). | |
| delegate | Yes | address; 0x-prefixed Ethereum address (indexed). | |
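With two indexed fields, standard `eth_getLogs` semantics allow pinning either topic position and wildcarding the other with null. As an illustrative sketch only (again, `TOPIC0` is a placeholder for the event-signature hash, which this page does not provide), an agent could watch all delegations to one address:

```python
from typing import Optional

def delegate_liquidity_filter(account: Optional[str] = None,
                              delegate: Optional[str] = None) -> dict:
    """Filter DelegateAverageLiquidity logs by either indexed field.
    Passing None for a position wildcards that topic, per standard
    eth_getLogs semantics. TOPIC0 is a placeholder for the event's
    keccak256 signature hash, which is not given on this page."""
    TOPIC0 = "0x" + "00" * 32  # placeholder event-signature hash

    def pad(addr):
        # Left-pad an address to 32 bytes, or keep the wildcard.
        return None if addr is None else "0x" + addr[2:].lower().rjust(64, "0")

    return {"topics": [TOPIC0, pad(account), pad(delegate)]}

# Match any account delegating to one specific address:
f = delegate_liquidity_filter(delegate="0x0000000000000000000000000000000000000abc")
```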
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It correctly discloses that fields are indexed and filterable, which is crucial for log filtering. However, it omits what triggers this event, what data it returns (no output schema exists), or whether the subscription is persistent/unsubscribable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-sentence structure is efficient and front-loaded: identifies the tool type, lists key parameters with their properties, and states the subscription mechanism. No redundancy or wasted text, though slightly more domain context could have replaced the generic 'Event emitted by the contract'.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and complex DeFi domain, the description adequately covers the input parameters but lacks explanation of the event's business logic (what delegation means) and subscription behavior. It is minimally viable but leaves significant gaps for an AI agent trying to select between similar liquidity tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters fully documented as Ethereum addresses. The description mentions 'account, delegate' as indexed fields, which overlaps with schema information but confirms their filterable nature. With complete schema coverage, this meets the baseline expectation without adding significant additional semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an event subscription tool ('Event emitted by the contract', 'Subscribe via log filters') and lists the indexed fields. However, it fails to explain what 'DelegateAverageLiquidity' means functionally (e.g., delegating liquidity tracking to another address) or how it differs from siblings like TrackAverageLiquidity and UnTrackAverageLiquidity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it mentions subscribing via log filters, there is no guidance on when to use this tool versus alternatives like TrackAverageLiquidity, nor does it mention prerequisites (e.g., whether the account must already track liquidity before delegating) or lifecycle management.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Deposit (A)
Event emitted by the contract. Indexed fields (filterable): underlying, account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256; pass as decimal string. | |
| account | Yes | address; 0x-prefixed Ethereum address (indexed). | |
| underlying | Yes | address; 0x-prefixed Ethereum address (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully identifies this as a subscription-based event listener (not a transaction) and notes filterable fields, but omits details about real-time vs historical querying, output event format, or subscription lifecycle management.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-sentence structure is efficiently front-loaded: it establishes the nature of the tool (event), identifies the indexed fields, then gives the subscription method. No redundant or wasted language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description should ideally explain what the returned event data contains or how subscription updates are delivered. It covers the input side adequately but leaves the output/behavioral contract underspecified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema provides 100% type coverage, the description adds valuable semantic context by explicitly noting that 'underlying' and 'account' are the filterable/indexed fields, guiding the user on which parameters can constrain the log filter versus which (amount) appear only in the unindexed event data and must be filtered client-side.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as an 'Event emitted by the contract' and specifies the resource (Deposit event), distinguishing it from action-oriented siblings like 'RequestDeposit' or 'Borrow'. However, it lacks specificity about what asset type is being deposited (e.g., tokens, ETH) which would strengthen context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides basic usage instruction ('Subscribe via log filters') and identifies which fields are indexed/filterable. However, it lacks explicit guidance on when to use this event subscription versus querying historical logs, or how it relates to the 'RequestDeposit' action flow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dispatch (B)
Call dispatch(). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden and partially succeeds by disclosing access control requirements and absence of return values. However, it fails to describe what state changes occur when dispatching or what the function actually executes within the protocol.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences are reasonably efficient, though the initial 'Call dispatch()' is redundant with the tool name. Information is front-loaded with the functional classification as a write function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations, the description provides minimum viable safety warnings (auth, no return) but omits critical context about the operation's business purpose, side effects, or relationship to the apparent request/dispatch pattern seen in sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters, establishing a baseline of 4. The description does not need to elaborate on parameters that do not exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies as a write function and references the contract function 'dispatch()', but fails to explain what is being dispatched (e.g., queued transactions, deferred operations) or its role in the protocol. The phrase 'Call dispatch()' is somewhat tautological given the tool name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides prerequisite guidance by warning about access control restrictions (e.g., onlyOwner) and suggesting contract source verification. However, it lacks explicit guidance on when to use this versus the many sibling request/execution tools (e.g., RequestBorrow vs Borrow).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
EnterMarket (C)
Event emitted by the contract. Indexed fields (filterable): underlying, account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| account | Yes | address; 0x-prefixed Ethereum address (indexed). | |
| underlying | Yes | address; 0x-prefixed Ethereum address (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It correctly discloses this is a subscription mechanism via log filters and identifies indexed/filterable fields, but omits business semantics (what entering a market means), rate limits, or authentication requirements typical of blockchain log filtering.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three terse sentences efficiently convey the nature (event), key fields, and mechanism (subscribe). No redundant phrases, though 'Event emitted by the contract' could be more active ('Subscribe to EnterMarket events').
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks critical business context for an event subscription tool: it does not describe what the event represents (user enabling an asset as collateral), return format (log entry structure), or how this relates to the broader protocol workflow visible in sibling tools like Deposit/Borrow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with clear descriptions (Ethereum addresses, indexed). The description adds marginal value noting these are 'filterable,' but largely restates what the schema already documents. Baseline 3 appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an event subscription tool with filterable indexed fields, but fails to explain what 'EnterMarket' signifies in the business context (entering a market to enable collateral) or distinguish it from sibling ExitMarket.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to subscribe to this event versus other events (e.g., ExitMarket), nor does it explain use cases like 'monitor when users enter markets' or prerequisites for using log filters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ExitMarket (B)
Event emitted by the contract. Indexed fields (filterable): underlying, account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| account | Yes | address; 0x-prefixed Ethereum address (indexed). | |
| underlying | Yes | address; 0x-prefixed Ethereum address (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the disclosure burden. It correctly identifies the indexed/filterable nature of parameters and implies subscription capability. However, it lacks details on return format (event logs vs subscription handle), whether historical queries are supported, or rate limiting concerns typical for log filters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three tightly focused sentences with no redundancy. Each sentence serves a distinct purpose: identifying the event type, listing filterable parameters, and indicating subscription methodology. The structure is front-loaded with the event classification.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter event tool with complete schema coverage, the description adequately covers basic subscription mechanics but lacks domain context (what market exit signifies) and behavioral specifics (block range handling, return structure). It meets minimum viability but leaves operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by explicitly grouping 'underlying' and 'account' as 'Indexed fields (filterable)', reinforcing their queryable nature beyond the schema's individual '(indexed)' tags. This semantic grouping helps agents understand filter construction.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an event emitted by the contract and mentions subscription via log filters, which partially clarifies the tool's purpose. However, it fails to explain what 'exiting a market' means functionally (e.g., disabling collateral, withdrawing positions) or how this tool relates to the EnterMarket sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance with only 'Subscribe via log filters' implying real-time monitoring. There is no explanation of when to monitor this event versus other lifecycle events (like EnterMarket, Deposit, Withdraw) nor prerequisites for subscription.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Genesis (C)
Event emitted by the contract. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full disclosure burden. It states this is an event but omits critical behavioral context: when the event fires (one-time at deployment vs recurring), what data it returns, and whether the subscription is persistent or polling-based. 'Genesis' implies specific lifecycle timing that is not explained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at two sentences with no redundancy. However, the brevity crosses into under-specification given the lack of annotations and output schema—additional context would be warranted for an event subscription tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an event subscription tool with no output schema, the description inadequately explains what the Genesis event signifies within this DeFi protocol context (likely contract/pool initialization), what fields the event contains, or how to handle the subscription lifecycle. Sibling tools suggest complex financial operations where event semantics are crucial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present with 100% schema description coverage. Baseline score of 4 applies as there are no parameters requiring semantic clarification beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an event subscription tool ('Subscribe via log filters'), but fails to clarify what the 'Genesis' event specifically represents (e.g., contract initialization, pool creation) or how it differs from sibling event tools like MarketActivated or ProxyCreated. The verb and resource are present but underspecified.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions the subscription mechanism ('log filters') but provides no guidance on when to use this event versus other event siblings, what triggers it, or prerequisites for subscription. No alternatives or exclusions are documented.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GovConvertReserves (A)
[DISCOVERY] Event emitted by the contract. Indexed fields (filterable): underlying, recipient. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256; pass as decimal string. | |
| recipient | Yes | address; 0x-prefixed Ethereum address (indexed). | |
| underlying | Yes | address; 0x-prefixed Ethereum address (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully identifies this as a read-only subscription operation ('Subscribe via log filters') and notes the indexed fields, but omits details about subscription lifecycle (persistent vs one-shot), rate limits, authentication requirements, or return format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences with minimal waste. Each sentence serves distinct purposes: identifying the event type, noting filterable fields, and stating the subscription mechanism. Minor deduction for the '[DISCOVERY]' prefix which is somewhat opaque without additional context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 simple parameters (addresses/uint256), no output schema, and no annotations, the description adequately covers the subscription mechanism. However, it lacks domain completeness regarding the event's business significance (when/why reserves are converted) and the subscription's behavioral characteristics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds valuable semantic context by explicitly highlighting that 'underlying' and 'recipient' are 'Indexed fields (filterable)'. This emphasizes the filterability aspect beyond the schema's type declarations, aiding parameter discovery for query construction.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's purpose as subscribing to a specific contract event ('GovConvertReserves') via log filters, using specific verbs ('Subscribe') and the resource ('Event emitted by the contract'). It distinguishes itself from action-oriented siblings (e.g., Borrow, Deposit) by explicitly labeling itself as an event subscription, though it lacks domain context explaining what 'Convert Reserves' signifies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance by identifying which parameters are indexed and filterable ('underlying, recipient'), helping users understand filtering capabilities. However, it lacks explicit guidance on when to monitor this specific governance event versus other Gov* event siblings or what conditions trigger this event.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GovSetAssetConfigCInspect
Event emitted by the contract. Indexed fields (filterable): underlying. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| newConfig | Yes | tuple. Fields: eTokenAddress, borrowIsolated, collateralFactor, borrowFactor, twapWindow. | |
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
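Since `newConfig` is a tuple and not indexed, its fields arrive packed in the log's `data` payload. The sketch below assumes static ABI encoding (one 32-byte word per field, in the order the table lists them) and assumes field types of address/bool/uint; neither the order nor the types are confirmed by the tool schema.

```python
# Hedged sketch: decoding the non-indexed newConfig tuple from a log's raw
# `data` hex. Assumes static ABI layout -- one 32-byte word per field in
# declaration order -- and the field types shown below, which are guesses.

def split_words(data_hex: str) -> list:
    body = data_hex.removeprefix("0x")
    return [body[i:i + 64] for i in range(0, len(body), 64)]

def decode_asset_config(data_hex: str) -> dict:
    w = split_words(data_hex)
    return {
        "eTokenAddress": "0x" + w[0][-40:],    # address: low 20 bytes
        "borrowIsolated": int(w[1], 16) != 0,  # bool encoded as 0/1 word
        "collateralFactor": int(w[2], 16),
        "borrowFactor": int(w[3], 16),
        "twapWindow": int(w[4], 16),
    }
```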
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions subscription but doesn't disclose whether this creates a persistent subscription, what the output format is, how long the subscription lasts, or side effects. Missing critical behavioral details for a log subscription tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences with zero waste. Each clause earns its place: declaring the event nature, identifying filterable fields, and stating the subscription mechanism. Appropriately front-loaded for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the nested object complexity (newConfig with 5 sub-fields) and absence of output schema, the description should explain the event's business impact (what asset configuration changes mean) and return data structure. It omits both, leaving significant gaps in contextual completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed param descriptions. The description adds valuable semantic mapping by explicitly stating that 'underlying' is filterable, bridging the blockchain concept 'indexed' (in schema) to the user action 'filterable'. This compensates slightly for the high schema coverage baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as a blockchain event and mentions filterable fields, but fails to explain the governance domain meaning—what 'asset config' represents or when this event fires. It distinguishes from action-oriented siblings (like Borrow) by labeling it an event, but doesn't clarify the semantic significance of the configuration changes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides only a minimal hint ('Subscribe via log filters') but lacks explicit guidance on when to subscribe vs. query historically, or how this relates to other governance event siblings like GovSetIRM or GovSetPricingConfig. No mention of prerequisites or filtering strategies for the newConfig parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GovSetIRMBInspect
Event emitted by the contract. Indexed fields (filterable): underlying. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| resetParams | Yes | bytes (hex-encoded bytes, 0x-prefixed). | |
| interestRateModel | Yes | uint256 (uint256, pass as decimal string). | |
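The `resetParams` parameter is dynamic `bytes`, which ABI-encodes differently from the fixed-width fields above: the head section holds a byte offset pointing at a length-prefixed tail. A hedged sketch of reading such a value, where `head_index` (the position of `resetParams`'s head word within the data) depends on the event's actual declaration order and is supplied by the caller:

```python
# Hedged sketch: reading a dynamic `bytes` value out of ABI-encoded event
# data. The head word at head_index holds a byte offset into the data;
# the tail starts with a 32-byte length word followed by the raw bytes.

def decode_dynamic_bytes(data_hex: str, head_index: int) -> str:
    body = data_hex.removeprefix("0x")
    head = body[head_index * 64:(head_index + 1) * 64]
    offset = int(head, 16)                         # byte offset of the tail
    length = int(body[offset * 2:offset * 2 + 64], 16)
    start = offset * 2 + 64                        # skip the length word
    return "0x" + body[start:start + length * 2]
```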
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It correctly identifies this as an event (read-only observation) and notes the indexed/filterable nature of the 'underlying' parameter, but omits what the event signifies (what system state changed), whether the subscription is push or pull, or typical trigger conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise (19 words) with no redundancy. Information is front-loaded ('Event emitted'), though the brevity constrains completeness. No filler words, but the technical shorthand ('Indexed fields (filterable)') assumes blockchain expertise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter event subscription tool with full schema coverage, the description covers basic mechanics (subscription, indexing) but lacks critical context: what the IRM (Interest Rate Model) is, what governance action triggers this event, or why an agent should monitor it. The 'resetParams' parameter's purpose in filtering remains unclear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100% (baseline 3), the description adds crucial context that 'underlying' is indexed and filterable, and implies the params are used for log filtering. This semantic hint about indexed event fields adds value beyond the raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this monitors an 'Event emitted by the contract' and mentions the indexed field 'underlying', but fails to explain what GovSetIRM actually represents (e.g., governance setting of Interest Rate Model) or when this event is emitted. It names the technical pattern (event) without semantic meaning.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Only provides 'Subscribe via log filters' which implies usage but offers no explicit guidance on when to subscribe to these events versus querying historical logs or using other governance monitoring tools. No mention of why an agent would need to track this specific event.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GovSetPricingConfigCInspect
Event emitted by the contract. Indexed fields (filterable): underlying. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| newPricingType | Yes | uint16 (uint16, pass as decimal string). | |
| newPricingParameter | Yes | uint32 (uint32, pass as decimal string). | |
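The tables repeatedly instruct agents to "pass as decimal string" for unsigned integers of varying widths (uint16 for `newPricingType`, uint32 for `newPricingParameter`). A minimal sketch of validating that convention before a call, so out-of-range values fail client-side rather than in the tool:

```python
# Minimal sketch: validating the "pass as decimal string" convention.
# A uintN value must be a non-negative base-10 string that fits N bits.

UINT_MAX = {16: 2**16 - 1, 32: 2**32 - 1, 256: 2**256 - 1}

def parse_uint(value: str, bits: int) -> int:
    if not value.isdigit():
        raise ValueError(f"uint{bits} must be a decimal string, got {value!r}")
    n = int(value)
    if n > UINT_MAX[bits]:
        raise ValueError(f"{n} does not fit in uint{bits}")
    return n
```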
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries full behavioral disclosure burden. It mentions 'Indexed fields (filterable): underlying' which adds useful indexing context, but fails to clarify whether this returns historical events, subscribes to future events, or executes a transaction. No disclosure of access control requirements, gas costs, side effects on the pricing model, or the meaning of pricingType/pricingParameter values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (two sentences), but the brevity contributes to ambiguity rather than clarity. The first sentence establishes the event nature; the second mixes indexing info with subscription instructions. Front-loading is adequate, but the content is too compressed to resolve the tool's fundamental purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a governance tool with 3 parameters, no annotations, and no output schema, the description provides insufficient domain context. It doesn't explain what contract this interacts with, what pricing configuration governs, or how this relates to the protocol's fee/reserve system. The sibling GovSet* tools suggest a governance suite, but the description doesn't situate this tool within that ecosystem.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description adds value by noting 'underlying' is indexed and filterable. However, it does not explain what pricing types/parameters are valid (enum values), what they represent semantically (fee tiers, interest models?), or why they are marked as 'new' (implying state change vs query filter).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an 'Event emitted by the contract' and mentions 'Subscribe via log filters,' which suggests a query/subscription tool. However, the required parameters named 'newPricingType' and 'newPricingParameter' strongly imply this is actually a state-changing transaction (governance action) rather than a passive subscription. The description frames the tool as the event itself rather than the action that triggers it, creating fundamental ambiguity about whether this tool reads or writes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus sibling governance tools like GovSetAssetConfig, GovSetIRM, or GovSetReserveFee. No mention of prerequisites (e.g., governance permissions) or when this configuration change is appropriate versus other asset configuration tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GovSetReserveFeeCInspect
Event emitted by the contract. Indexed fields (filterable): underlying. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| newReserveFee | Yes | uint32 (uint32, pass as decimal string). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions indexed fields but fails to explain what triggers this event, whether subscription requires specific permissions, what the return format looks like, or the business significance of the reserve fee being set.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently packaged into two sentences with no redundancy. However, the critical information that this is an event subscription (not the setter function) appears in the second sentence rather than being front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an event subscription tool with no output schema, the description fails to explain the business context (what is a reserve fee? why does it change?) or describe the subscription lifecycle. It also doesn't resolve the tension between the action-oriented name and the event-based description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description notes that 'underlying' is filterable, which aligns with the schema's '(indexed)' notation but doesn't add semantic meaning (e.g., that underlying refers to the asset address or explain what newReserveFee represents in business terms).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it subscribes to contract events via log filters, which clarifies the mechanism. However, the tool name 'GovSetReserveFee' strongly implies an action to set the fee, while the description reveals it's actually an event subscription. This naming mismatch creates ambiguity about whether this tool performs state changes or merely observes them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Subscribe via log filters,' indicating the technical mechanism, but provides no contextual guidance on when to monitor this event versus using other governance tools, nor does it distinguish between subscribing to future events versus querying historical ones.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
InstallerInstallModuleBInspect
Event emitted by the contract. Indexed fields (filterable): moduleId, moduleImpl. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| moduleId | Yes | uint256 (uint256, pass as decimal string) (indexed). | |
| moduleImpl | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| moduleGitCommit | Yes | bytes32 (32-byte hex string, 0x-prefixed). | |
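A sketch of one way an agent might interpret `moduleGitCommit`: a git SHA-1 is 20 bytes, so a common convention is to left-align it in the bytes32 and zero-pad the remainder. That convention is an assumption here, not something the tool schema states.

```python
# Hedged sketch: extracting a 20-byte git SHA-1 from a bytes32 word,
# ASSUMING the hash is left-aligned and right-padded with zeros.

def commit_from_bytes32(word: str) -> str:
    body = word.removeprefix("0x")
    if len(body) != 64:
        raise ValueError("expected a 32-byte hex word")
    return body[:40]  # first 20 bytes = 40 hex chars
```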
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, yet the description lacks critical behavioral details: whether this returns historical logs or establishes a real-time subscription, output format, pagination behavior, or required node capabilities (eth_subscribe vs eth_getLogs). 'Event emitted by the contract' describes the event semantics but not the tool's execution behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loads the event identification, specifies filterable fields, and concludes with usage instruction. No redundant words or generic filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple event subscription tool with complete parameter schema coverage. However, lacking output schema means the description should ideally mention return format or subscription lifecycle, which it does not. Sufficient for basic usage but leaves operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by explicitly identifying which schema parameters are indexed/filterable (moduleId, moduleImpl), which is crucial context for Ethereum log filtering not fully conveyed by the schema's '(indexed)' parentheticals alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States it represents a contract event with specific indexed fields, and indicates the action is to subscribe via log filters. Specific to module installation events, distinguishing from transaction siblings like Borrow/Deposit. Could be clearer about what 'InstallerInstallModule' actually signifies (new module installation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides the mechanism ('Subscribe via log filters') but fails to specify when to use this versus querying moduleIdToImplementation or listening to other admin events like InstallerSetGovernorAdmin. No prerequisites or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
InstallerSetGovernorAdminCInspect
Event emitted by the contract. Indexed fields (filterable): newGovernorAdmin. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| newGovernorAdmin | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It mentions the event is 'filterable' and 'indexed', but lacks critical behavioral details: whether this returns historical logs or streams new ones, the subscription lifecycle, output format, polling mechanism, or rate limits. 'Subscribe via log filters' hints at behavior but is insufficient for an event subscription tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: declaration of event nature, specification of indexed/filterable fields, and subscription mechanism. No redundancy or filler. Front-loaded with the essential classification (event vs action).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an event subscription tool with no output schema, the description fails to describe what data is returned (event log structure, block number, transaction hash), nor does it explain the contract context (which contract emits this?). The blockchain domain complexity demands more completeness given the minimal structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the parameter fully documented as an Ethereum address. The description adds value by noting the field is 'indexed' and 'filterable'—critical blockchain-specific context for log filtering that the schema doesn't convey. Baseline 3 for high schema coverage; reaches 3 with modest added context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clarifies that despite the action-oriented name 'InstallerSetGovernorAdmin', this tool actually subscribes to contract events (not performs the action). It specifies the specific event type (governance admin change) and identifies it as emitted by the contract, distinguishing it from transaction-based siblings like Borrow or Deposit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it mentions 'Subscribe via log filters', there is no explicit guidance on when to use this tool versus querying historical events, nor does it distinguish from similar event-subscription siblings like InstallerSetUpgradeAdmin. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
InstallerSetUpgradeAdminAInspect
Event emitted by the contract. Indexed fields (filterable): newUpgradeAdmin. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| newUpgradeAdmin | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It adequately discloses this is a read-only observation of contract logs ('Event emitted', 'Subscribe') rather than a state-mutating transaction, and notes the filterable nature of the indexed address field. However, it omits details about log format, block confirmation requirements, or subscription lifecycle.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient at two sentences. The first establishes the event nature, the second specifies the indexed field and subscription mechanism. No tautology or redundant repetition of the tool name. Information density is appropriate for this simple, single-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple event subscription with one parameter and no output schema. The description covers the essential contract interaction pattern (event subscription) and identifies the key data field. However, given the administrative sensitivity implied by 'UpgradeAdmin', it could clarify the security implications or relationship to the proxy upgrade pattern suggested by sibling tools like 'ProxyCreated'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the parameter is well-documented in the schema as an Ethereum address. The description reinforces the 'filterable' aspect (indexed) which is critical for efficient log queries in EVM systems. No additional syntax constraints or example addresses are provided, warranting a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as an event subscription tool ('Event emitted by the contract... Subscribe via log filters') rather than a transaction, distinguishing it from the imperative-sounding name 'Set'. It specifies the resource (contract event) and the indexed field (newUpgradeAdmin), though it could explicitly state this tracks administrative privilege transfers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit mechanics ('Subscribe via log filters') which is crucial for blockchain event tools, but lacks strategic guidance on when to monitor this event (e.g., detecting admin changes) or how it differs from the similar 'InstallerSetGovernorAdmin' sibling. No mention of prerequisites like connection to an Ethereum node.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
LiquidationBInspect
Event emitted by the contract. Indexed fields (filterable): liquidator, violator, underlying. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| repay | Yes | uint256 (uint256, pass as decimal string). | |
| yield | Yes | uint256 (uint256, pass as decimal string). | |
| discount | Yes | uint256 (uint256, pass as decimal string). | |
| violator | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| collateral | Yes | address (Ethereum address, 0x-prefixed). | |
| liquidator | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| healthScore | Yes | uint256 (uint256, pass as decimal string). | |
| baseDiscount | Yes | uint256 (uint256, pass as decimal string). | |
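With three parameters indexed, the remaining six fields arrive packed in the log's `data` payload as 32-byte words. The sketch below mirrors the table's ordering, which is an assumption; the real order follows the event's on-chain Solidity declaration.

```python
# Hedged sketch: splitting a Liquidation log's `data` into its six
# non-indexed fields. Field ORDER mirrors the parameter table above and is
# an assumption about the on-chain declaration order.

FIELDS = ["repay", "yield", "discount", "collateral", "healthScore", "baseDiscount"]

def decode_liquidation_data(data_hex: str) -> dict:
    body = data_hex.removeprefix("0x")
    words = [body[i:i + 64] for i in range(0, len(body), 64)]
    if len(words) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} words, got {len(words)}")
    out = {}
    for name, word in zip(FIELDS, words):
        if name == "collateral":           # address: low 20 bytes of the word
            out[name] = "0x" + word[-40:]
        else:                              # uint256
            out[name] = int(word, 16)
    return out
```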
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and discloses key traits: the event-driven nature, filterability of indexed fields, and subscription mechanism. However, it lacks details on data volume, latency, rate limits, or error handling that would help agents predict subscription behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: front-loaded with the event type, followed by indexed field specifics, and closing with the subscription mechanism. Every sentence serves a distinct purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage on 9 parameters and no output schema, the description adequately references the technical structure. However, for a complex DeFi domain tool, it omits domain context (what constitutes a liquidation, implications of health scores) that would help agents understand the event's business significance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by explicitly grouping the three indexed fields (liquidator, violator, underlying) and labeling them as 'filterable', which reinforces their queryable nature beyond the schema's individual parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an 'Event emitted by the contract' with specific indexed fields (liquidator, violator, underlying), clearly indicating it monitors liquidation events rather than executing them. However, it does not explicitly differentiate from sibling 'RequestLiquidate', which could confuse agents on whether this performs liquidations or merely observes them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While 'Subscribe via log filters' hints at monitoring usage, the description lacks explicit guidance on when to use this passive subscription versus the active 'RequestLiquidate' execution tool. Given the high cost of confusing observation with transaction execution in blockchain contexts, this omission is significant.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
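The "filterable indexed fields" these assessments keep referencing map directly onto Ethereum log topics. As a minimal sketch (names are hypothetical; the `topic0` value would in practice be the keccak-256 hash of the full event signature, elided here), an `eth_getLogs` filter for a Liquidation-style event with indexed liquidator and violator fields could be assembled like this:

```python
from typing import Optional

def address_to_topic(addr: str) -> str:
    """Left-pad a 20-byte 0x-prefixed address to the 32-byte topic format."""
    return "0x" + addr[2:].lower().rjust(64, "0")

def liquidation_filter(topic0: str,
                       liquidator: Optional[str] = None,
                       violator: Optional[str] = None) -> dict:
    """Build eth_getLogs filter params; None leaves a topic unconstrained."""
    return {
        "fromBlock": "earliest",
        "toBlock": "latest",
        "topics": [
            topic0,
            address_to_topic(liquidator) if liquidator else None,
            address_to_topic(violator) if violator else None,
        ],
    }
```

Passing `None` in a topic position is standard JSON-RPC filter semantics for "match any value", which is what makes indexed fields optional filters rather than required inputs.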
MarketActivated (B)
Event emitted by the contract. Indexed fields (filterable): underlying, eToken, dToken. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| dToken | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| eToken | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description establishes this is an event with filterable indexed fields and a subscription mechanism. However, it fails to explain the semantic meaning of 'MarketActivated' (what triggers this event, what state change it represents) or the format of the subscription response.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient fragments with no redundant words. Front-loaded with the event nature, followed by filterable fields and subscription method. Slightly choppy structure prevents a 5.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a basic event subscription tool with no output schema defined. Covers the essential filterable parameters, but lacks context on event trigger conditions (e.g., governance action, initialization) and expected payload structure when the event fires.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description reinforces that the three parameters are indexed and filterable, which adds contextual meaning for log filtering, but does not add syntactic details beyond the schema's Ethereum address descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as an event subscription tool ("Event emitted by the contract... Subscribe via log filters") and specifies the three key indexed parameters. However, it doesn't differentiate from similar event tools like PTokenActivated or AssetStatus in the sibling list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance by mentioning 'Subscribe via log filters' and identifying which fields are filterable. However, lacks explicit guidance on when to monitor this specific event versus other protocol events (e.g., during market setup vs ongoing operations) or prerequisites for subscription.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
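On the consumer side of the subscription, the three indexed addresses arrive as 32-byte log topics rather than as named fields. A hedged sketch of decoding them back to addresses (the `decode_market_activated` helper and its topic ordering are assumptions, following the common Solidity convention that indexed parameters occupy `topics[1..]` in declaration order):

```python
def topic_to_address(topic: str) -> str:
    """Recover the 20-byte address from a 32-byte indexed log topic."""
    return "0x" + topic[-40:]

def decode_market_activated(log: dict) -> dict:
    """Map topics[1..3] back onto the event's named indexed fields."""
    underlying, etoken, dtoken = (topic_to_address(t) for t in log["topics"][1:4])
    return {"underlying": underlying, "eToken": etoken, "dToken": dtoken}
```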
moduleIdToImplementation (A)
Get moduleIdToImplementation(moduleId: string). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns address.
| Name | Required | Description | Default |
|---|---|---|---|
| moduleId | Yes | uint256 (uint256, pass as decimal string). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong disclosure given zero annotations: explicitly states 'does not modify contract state' (safety), 'any address can call' (permissions), and 'Returns address' (return type). Missing only edge case details or rate limit warnings typical of blockchain read operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four efficient sentences with zero redundancy: function signature pattern, safety declaration, permission declaration, and return type. Front-loaded with the operation name and well-structured for rapid parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a simple 1-parameter getter without output schema. Describes the return value (address) compensating for missing output schema. Minor gap: could clarify that 'implementation' refers to the underlying logic contract in a proxy pattern.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage describing moduleId as uint256 decimal string. Description mentions '(moduleId: string)' but adds no semantic meaning beyond the schema's technical specification. Baseline 3 appropriate when schema carries full load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the tool retrieves an implementation address using a module ID, with 'Returns address' clarifying the output. Distinguishes implicitly from sibling moduleIdToProxy (implementation vs proxy), though could explicitly clarify this distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance through 'Read-only' and 'Unrestricted' labels, indicating safe, permissionless calling. However, lacks explicit comparison to moduleIdToProxy or guidance on when to query implementation vs proxy addresses.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
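The "pass as decimal string" instruction in the schema implies the caller (or the server) must ABI-encode the moduleId into a 32-byte word before making the read call. A small sketch of that encoding step, assuming standard ABI encoding for a `uint256` argument (the helper name is hypothetical):

```python
def encode_uint256(decimal_str: str) -> str:
    """ABI-encode a uint256 given as a decimal string, per the tool schema."""
    value = int(decimal_str, 10)
    if not 0 <= value < 2 ** 256:
        raise ValueError("value out of uint256 range")
    # A uint256 argument is a single 32-byte big-endian word (64 hex chars).
    return format(value, "064x")
```

The encoded word would be appended to the function's 4-byte selector to form the `eth_call` calldata; since the function is read-only, no transaction or gas is involved.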
moduleIdToProxy (A)
Get moduleIdToProxy(moduleId: string). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns address.
| Name | Required | Description | Default |
|---|---|---|---|
| moduleId | Yes | uint256 (uint256, pass as decimal string). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure burden and succeeds well: explicitly states 'Read-only', 'does not modify contract state', 'Unrestricted' access, and return type (address). No dangerous side effects or authentication requirements are omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise four-sentence structure with zero redundancy. Information is front-loaded: function signature implication, safety guarantees (read-only), access controls (unrestricted), and return value are all conveyed efficiently without filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple single-parameter getter function. Without an output schema, the description explicitly states the return type ('Returns address'), compensating for that gap. No essential context is missing for invoking this read function, though a brief note distinguishing 'proxy' from 'implementation' would strengthen it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and already documents the uint256 format requirement. The description mentions 'moduleId: string' but adds no semantic meaning (e.g., what constitutes a valid moduleId, expected value ranges, or relationship to the proxy system) beyond the parameter name and type already visible in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the tool retrieves a proxy address for a given moduleId and explicitly notes it returns an address. The opening 'Get moduleIdToProxy' is slightly tautological, but the description clarifies the resource type (proxy contract address) and distinguishes this as a read operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides access control context ('Unrestricted — any address can call'), indicating no special permissions needed. However, it fails to differentiate from sibling tool 'moduleIdToImplementation' or explain when to prefer proxy addresses over implementation addresses.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
name (A)
Get name(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns string.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With zero annotations provided, the description carries the full burden of disclosure. It successfully communicates three critical behavioral traits: immutability ('does not modify contract state'), permissionlessness ('any address can call'), and return type ('Returns string'). It lacks detail on error conditions or gas costs, but covers the essential safety profile for a simple getter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste. Information is optimally ordered: purpose first, safety properties second, access control third, return value fourth. The em-dashes efficiently separate clauses without verbosity. Appropriate length for a zero-parameter read function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (no params, no nested objects) and absence of output schema, the description adequately compensates by disclosing the return type ('Returns string'). It appropriately omits over-specification for a standard blockchain getter function, though explicitly stating this retrieves the contract's human-readable name would provide perfect completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters. Per evaluation rules, 0 params establishes a baseline score of 4. The description appropriately does not invent documentation for parameters that do not exist, though it could have clarified that no inputs are required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Get') and resource ('name'), with the parenthetical syntax suggesting a function call. It effectively distinguishes from state-modifying siblings (Borrow, Deposit, Liquidation, etc.) by explicitly noting it is read-only and unrestricted. It could be improved by specifying what 'name' refers to (e.g., contract/token name), but remains clear in context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear contextual signals distinguishing this from alternatives: 'Read-only' differentiates it from state-modifying transaction tools, and 'Unrestricted' indicates no authentication requirements, unlike Gov- or Admin-prefixed siblings. While it doesn't explicitly say 'use this when you need to identify the contract,' the behavioral constraints provide clear selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
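Because `name()` takes no inputs and only declares 'Returns string', the one piece of mechanics an agent still has to handle is decoding the ABI-encoded string from the raw `eth_call` return data. A minimal decoder sketch, assuming standard ABI encoding for a single dynamic string return (offset word, length word, then padded UTF-8 bytes):

```python
def decode_abi_string(ret: str) -> str:
    """Decode a single ABI-encoded string return value from raw call data."""
    h = ret[2:] if ret.startswith("0x") else ret
    offset = int(h[:64], 16) * 2          # byte offset -> hex-char offset
    length = int(h[offset:offset + 64], 16)
    start = offset + 64                   # data begins after the length word
    return bytes.fromhex(h[start:start + 2 * length]).decode("utf-8")
```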
ProxyCreated (B)
Event emitted by the contract. Indexed fields (filterable): proxy. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| proxy | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| moduleId | Yes | uint256 (uint256, pass as decimal string). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of behavioral disclosure. It successfully identifies that 'proxy' is an indexed/filterable field and mentions the subscription mechanism. However, it omits whether this captures historical logs or real-time events, polling behavior, or any authentication requirements typical of blockchain log filtering.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three short, efficient sentences with no redundancy. It front-loads the event type identification, follows with key technical constraints (indexed fields), and ends with the usage mechanism. Every sentence earns its place with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 2-parameter schema with 100% coverage and no output schema, the description is minimally sufficient. However, it lacks domain-specific context (what contract emits this, what constitutes a 'proxy' in this system, when this event fires in the protocol lifecycle) that would help an agent select this among the many event-related siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds marginal value by explicitly calling out 'proxy' as the indexed field in the text, reinforcing the schema documentation. No additional semantic context (e.g., what moduleId represents in the protocol) is provided beyond the schema's type information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as an 'Event emitted by the contract' combined with the name 'ProxyCreated', making it unambiguous that this listens for proxy creation events rather than creating proxies (distinguishing it from action-oriented siblings like Borrow or Deposit). It mentions the specific indexed field 'proxy', adding technical specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Subscribe via log filters' which hints at the subscription mechanism, but provides no guidance on when to use this event versus querying alternative tools like 'moduleIdToProxy' or listening to other events like 'MarketActivated'. No explicit when-to-use or when-not-to-use guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
PTokenActivated (B)
Event emitted by the contract. Indexed fields (filterable): underlying, pToken. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| pToken | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries full disclosure burden. It establishes this is an event (not a function call) and identifies filterable fields, but omits trigger conditions, emission frequency, payload structure, or whether this indicates state change vs initialization.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with minimal waste. Information is front-loaded (event type first), though 'Event emitted by the contract' is somewhat redundant with the tool naming convention. Structure is clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple two-parameter schema and lack of output schema, the description covers basic mechanics but leaves significant gaps in domain context. For an event tool, it should explain what activation signifies in the protocol lifecycle and what listening applications should expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters fully documented as Ethereum addresses. The description repeats the field names ('underlying, pToken') but adds no semantic value beyond the schema—no format guidance, validation rules, or relationship between the two addresses.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as a contract event mentioning indexed fields, but fails to explain what 'PToken activation' actually means (e.g., initialization, first deposit, governance action) or when this event fires. It restates the name without clarifying the domain concept.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides basic usage instruction ('Subscribe via log filters') and notes filterable fields, but offers no guidance on when to subscribe to this event versus sibling events like MarketActivated, ProxyCreated, or Genesis. No prerequisites or authentication context provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
PTokenUnWrap (B)
Event emitted by the contract. Indexed fields (filterable): underlying, account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Lacks annotations, so the description carries full behavioral burden. It mentions indexed fields but omits whether this is read-only, what the subscription returns (event logs vs processed data), or lifecycle details like historical scanning versus real-time monitoring.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences with no redundancy. Information is front-loaded with the event nature declared first, followed by filterable fields and subscription method. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple event subscription with good schema coverage, but should explain what the event represents (unwrapping operation) and what data structure the subscriber receives since no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with type descriptions. The description adds crucial semantic context that indexed fields are 'filterable,' explaining the query capability beyond the schema's technical '(indexed)' tags for the underlying and account parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States this subscribes to contract events via log filters, distinguishing it from transaction-based siblings like Withdraw or PTokenWrap. However, it fails to explain what 'PTokenUnWrap' signifies (unwrapping tokens) or the business context of the event.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides minimal guidance beyond 'Subscribe via log filters.' Does not specify when to monitor this event versus calling action methods, nor does it mention prerequisites like node connections or blockchain access requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
PTokenWrap (B)
Event emitted by the contract. Indexed fields (filterable): underlying, account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure burden. It identifies the event nature and indexed fields but lacks critical behavioral context: what wrapping entails, whether subscription is polling or streaming, expected return format, or any rate limiting considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely compact at three sentences with zero waste. Front-loaded with the event nature, followed immediately by actionable filter parameters and subscription method. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for an event subscription tool with complete parameter documentation (100% coverage), but missing the semantic meaning of the 'Wrap' operation itself and its relationship to the PTokenUnWrap sibling event. No output schema is present but expected for event subscriptions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage where individual parameters are documented, the description adds valuable semantic grouping by explicitly identifying which fields are 'Indexed' and 'filterable' (underlying, account). This connects the schema technicalities to their functional purpose in log filtering.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as an event subscription tool ('Event emitted by the contract', 'Subscribe via log filters'), distinguishing it from transaction-based siblings like Deposit or Borrow. However, it fails to explain what 'PTokenWrap' semantically represents (token wrapping mechanics), leaving domain context unexplained.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it mentions the subscription mechanism ('Subscribe via log filters'), it provides no guidance on when to monitor PTokenWrap events versus related events like PTokenUnWrap or MarketActivated, nor does it explain typical use cases for tracking wrap operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Repay (A)
Event emitted by the contract. Indexed fields (filterable): underlying, account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It successfully identifies the indexed fields for filtering, but omits behavioral details such as whether this uses polling or websockets, block range limits, or the structure of returned log entries.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient at three short clauses. The key information (event nature, indexed fields, subscription method) is front-loaded with zero redundancy or generic filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description adequately covers the input parameters via schema + indexed field notes. However, it should ideally disclose what data structure is returned (raw logs, decoded event objects) or mention pagination for historical queries.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing clear typing for all three parameters. The description adds valuable semantic context by explicitly identifying which parameters are indexed ('underlying', 'account'), which is critical for constructing efficient Ethereum log filters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an event ('Event emitted by the contract') and specifies the action ('Subscribe via log filters'), distinguishing it from action-oriented siblings like 'RequestRepay'. However, it could strengthen the action verb by stating 'Subscribe to Repay events' explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through 'Subscribe via log filters' and notes which fields are filterable ('Indexed fields: underlying, account'). However, it lacks explicit contrast with 'RequestRepay' to clarify when to monitor events versus initiate transactions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
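The distinction the assessment highlights, indexed fields versus the non-indexed `amount`, determines where each value lives in a returned log: indexed fields occupy `topics`, while `amount` sits in the `data` payload. A hedged decoder sketch (the function name and the topic ordering, underlying then account, are assumptions following declaration order):

```python
def decode_repay(log: dict) -> dict:
    """Split a Repay-style log: indexed fields from topics, amount from data."""
    return {
        "underlying": "0x" + log["topics"][1][-40:],  # indexed address
        "account": "0x" + log["topics"][2][-40:],     # indexed address
        "amount": int(log["data"], 16),               # non-indexed uint256
    }
```

This is also why only `underlying` and `account` are listed as filterable: `eth_getLogs` can match on topics, but never on the opaque `data` field.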
RequestBorrow (C)
Event emitted by the contract. Indexed fields (filterable): account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description adds useful context that 'account' is indexed/filterable and mentions subscription via log filters, but omits critical behavioral details like return format, side effects, or pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with minimal waste. Front-loaded with event identification. Flow is somewhat choppy but information density is high for the length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacking both annotations and output schema, the description should explain what the tool returns (event logs, subscription handles) and execution behavior. This critical gap leaves the agent uncertain about invocation results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete type documentation for both parameters. Description notes the indexed nature of 'account', though this is already present in the schema. Baseline score appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies the tool relates to contract events (specifically borrow requests) and mentions log filters, but lacks clarity on whether this queries historical events or creates subscriptions, and fails to distinguish from the 'Borrow' action sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions 'Subscribe via log filters' providing minimal context, but offers no explicit guidance on when to use this event listener versus the Borrow execution tool or other Request* siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
RequestBurn
Event emitted by the contract. Indexed fields (filterable): account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full behavioral burden but only discloses that fields are indexed and filterable. It does not clarify if invoking this tool performs a blockchain write (emitting the event) or establishes a read subscription, nor does it mention gas costs, confirmation times, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately terse (three fragments) but inefficiently structured. The content is front-loaded with tautological event classification rather than tool action, and the 'Subscribe via log filters' clause is ambiguous without context on how the parameters map to filter topics.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a blockchain interaction tool with no annotations and no output schema, the description fails to establish the tool modality (transaction vs. subscription), expected confirmation behavior, or relationship to the emitted event. Critical gaps remain for an agent to invoke this correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description mentions 'Indexed fields (filterable): account' which reinforces the schema's '(indexed)' notation but adds no new semantic context about what 'amount' represents (e.g., burn amount in wei/token units) or valid ranges.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Event emitted by the contract' but fails to clarify what the tool itself does—whether it emits/submits this event, subscribes to it, or queries historical occurrences. The name 'RequestBurn' suggests an action-oriented transaction tool, while the description treats it as a passive event description, creating ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus siblings like RequestWithdraw, RequestTransferDToken, or the non-Request variants. No prerequisites, preconditions, or filtering strategies are mentioned despite the 'filterable' hint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
RequestDeposit
Event emitted by the contract. Indexed fields (filterable): account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, placing full burden on the description. The text mentions subscription but fails to disclose whether this polls historical logs, streams real-time events, response format, or rate limits. It does not clarify if calling this tool initiates a state-changing transaction (despite 'Request' in the name) or purely reads event data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief at three short fragments and front-loads the event type identification. However, the trailing fragments ('Indexed fields... Subscribe via...') scan choppily, slightly disrupting flow while remaining information-dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema or annotations, the description should explain what the subscription returns (e.g., event log structure, block confirmation details). It also fails to resolve the ambiguity between the 'RequestDeposit' event and the 'Deposit' action, leaving significant contextual gaps for a blockchain interaction tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds valuable semantic context by stating that indexed fields are 'filterable,' explaining the utility of the account parameter's indexed property beyond what the schema alone conveys. This helps users understand how to effectively filter subscriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the tool relates to a contract event and mentions 'Subscribe via log filters,' establishing it handles event subscriptions. However, it does not clearly distinguish this 'RequestDeposit' event from the 'Deposit' sibling tool or clarify why one would subscribe to this specific event versus using the action-oriented Deposit tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it mentions the mechanism ('Subscribe via log filters'), it provides no guidance on when to use this subscription-based tool versus the 'Deposit' or other sibling tools. It lacks prerequisites, filtering strategy guidance, or use-case scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
RequestLiquidate
Event emitted by the contract. Indexed fields (filterable): liquidator, violator, underlying. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| repay | Yes | uint256 (uint256, pass as decimal string). | |
| minYield | Yes | uint256 (uint256, pass as decimal string). | |
| violator | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| collateral | Yes | address (Ethereum address, 0x-prefixed). | |
| liquidator | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
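For an event with three indexed fields like this one, topic positions and OR-lists are what a filter construction actually needs. A minimal sketch using the standard `eth_getLogs` topics convention, with made-up addresses and a placeholder topic0 (this is not code from this server):

```python
# Standard eth_getLogs topics convention for three indexed fields:
# topics[1]=liquidator, topics[2]=violator, topics[3]=underlying.
# None matches anything; a list at a position means OR across its values.

def pad_address(addr):
    """Left-pad a 20-byte hex address to the 32-byte topic format."""
    return "0x" + addr.lower().removeprefix("0x").zfill(64)

topics = [
    "0x" + "00" * 32,               # placeholder topic0 (keccak256 of signature)
    None,                           # any liquidator
    pad_address("0x" + "11" * 20),  # one specific violator (made-up address)
    [pad_address("0x" + "22" * 20), # underlying is either of two
     pad_address("0x" + "33" * 20)],# made-up token addresses (OR)
]
```

The non-indexed parameters here (repay, minYield, collateral) cannot appear in `topics`; they are ABI-encoded into the log's data field and only recoverable by decoding returned logs.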
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While the description mentions indexed fields are filterable and implies subscription behavior ('Subscribe via log filters'), it provides insufficient behavioral context for a complex DeFi operation. No information is given about liquidation incentives, gas costs, failure modes, or the relationship between the 'Request' event and the actual liquidation execution.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three brief sentences. While efficient in length, the structure is fragmented and front-loads the tautological 'Event emitted by the contract' rather than the actionable purpose. The crucial 'Subscribe' information appears at the end without clear connection to the event description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of liquidation mechanics in DeFi protocols and the absence of annotations or output schema, the description is inadequate. It fails to explain liquidation economics, the role of the liquidator/violator, or what successful invocation achieves.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description mentions that liquidator, violator, and underlying are indexed/filterable, though this information is already present in the individual parameter descriptions within the schema. No additional semantic context is added for the remaining parameters (collateral, repay, minYield) beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Event emitted by the contract' and 'Subscribe via log filters', but fails to clearly articulate whether this tool initiates a liquidation request or subscribes to liquidation events. The passive phrasing ('Event emitted') describes the subject matter rather than the tool's action, creating ambiguity about its actual function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus the 'Liquidation' sibling tool, nor are prerequisites (such as health factor violations or approval requirements) mentioned. The description lacks any 'when to use' or 'when not to use' context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
RequestMint
Event emitted by the contract. Indexed fields (filterable): account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full behavioral burden. It mentions 'Subscribe via log filters' and notes that the account field is 'filterable', giving some hint about filtering capabilities, but lacks critical details: return format (subscription ID? array of events?), persistence, callback mechanism, and whether this triggers a mint or merely observes one.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse at three sentence fragments; while no words are wasted, the brevity comes at the cost of clarity. Front-loading 'Event emitted by the contract' prioritizes taxonomy over action, forcing the user to infer the tool's function from the final clause.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema with 100% coverage and no output schema, the description adequately identifies the parameters but fails to explain what a successful invocation returns or how the subscription lifecycle works. Acceptable for a simple event-subscription signature, but minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds the term 'filterable' for the indexed account field, which slightly augments the schema's '(indexed)' notation, but does not elaborate on the amount parameter's units (wei? whole tokens?) or provide syntax examples beyond what the schema already states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies the domain (blockchain contract event) and mentions the resource (mint), but the phrasing 'Event emitted by the contract' describes what the event is rather than what the tool does. Only the final phrase 'Subscribe via log filters' indicates the action, leaving ambiguity about whether this tool queries historical events or registers a live subscription.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use RequestMint versus siblings like RequestBurn, Deposit, or the other Request* variants. No mention of prerequisites (e.g., address format requirements) or what constitutes a valid mint request.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
RequestRepay
Event emitted by the contract. Indexed fields (filterable): account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Subscribe via log filters' suggesting read-only event subscription, but does not clarify return format (event logs vs transaction receipt), pagination limits for historical queries, or whether this blocks for real-time events. The imperative tool name creates uncertainty about side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief, but the opening 'Event emitted by the contract' is tautological and feeds the action-vs-event ambiguity rather than resolving it. The remaining text packs filter information efficiently, yet the opening spends space on obvious contract terminology instead of clarifying tool behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 2 required parameters, no annotations, and no output schema, the description inadequately explains the RequestRepay event's role in the lending protocol lifecycle or its relationship to the Repay action. It omits critical context like time-range support for log filtering, confirmation requirements, or auth needs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage detailing Ethereum types (address, uint256). The description adds value by noting 'account' is an indexed/filterable field, which helps with log filter construction. However, it does not explain the semantic meaning of 'amount' (e.g., repayment quantity in wei) or why these specific parameters are required for subscription.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it represents an 'Event emitted by the contract' for log filter subscriptions, but this conflicts with the imperative tool name 'RequestRepay' and required action-oriented parameters (account, amount), creating ambiguity about whether this tool performs a repayment request action or subscribes to events. It fails to distinguish from the 'Repay' sibling or clarify the event-versus-action nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the 'Repay' sibling tool or other 'Request*' prefixed tools. Does not indicate whether this is for monitoring historical events, subscribing to real-time events, or initiating transactions, leaving the agent without selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
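The selection ambiguity flagged above (historical query vs real-time subscription vs transaction) maps onto distinct standard Ethereum JSON-RPC calls. A sketch of the first two request shapes; the filter body, ids, and the filter-id placeholder are illustrative:

```python
# Two standard JSON-RPC shapes an "event" tool could mean:
# a one-shot historical query vs an installed filter polled for new matches.
import json

filter_params = {"fromBlock": "0x0", "toBlock": "latest",
                 "topics": []}  # illustrative: empty topics match all events

# Historical: fetch all matching past logs in one call.
historical = {"jsonrpc": "2.0", "id": 1,
              "method": "eth_getLogs", "params": [filter_params]}

# Real-time: install a filter, then poll it for new matches.
subscribe = {"jsonrpc": "2.0", "id": 2,
             "method": "eth_newFilter", "params": [filter_params]}
poll = {"jsonrpc": "2.0", "id": 3,
        "method": "eth_getFilterChanges",
        "params": ["<filter-id returned by eth_newFilter>"]}

wire = json.dumps(historical)  # what actually goes over HTTP
```

A description that named one of these modes explicitly (or stated that it instead submits a transaction) would resolve exactly the ambiguity the review keeps flagging.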
RequestSwap
Event emitted by the contract. Indexed fields (filterable): accountIn, accountOut, underlyingIn. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| swapType | Yes | uint256 (uint256, pass as decimal string). | |
| accountIn | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| accountOut | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| underlyingIn | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| underlyingOut | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full disclosure burden. It mentions indexed fields (accountIn, accountOut, underlyingIn) which aids filtering logic, but fails to explain subscription lifecycle, return format, authentication requirements, or state side effects. It does not clarify if the subscription is synchronous or how log filters are managed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-sentence structure is appropriately terse, but the brevity exacerbates the ambiguity around whether this tool initiates swaps or subscribes to events. Each sentence conveys distinct information, yet the front-loaded 'Event emitted' framing conflicts with the imperative tool name pattern seen in siblings.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 6-parameter event subscription tool with no output schema, the description lacks critical context: subscription handling (polling vs streaming), event confirmation semantics, or return data structure. The tool's relationship to the contract's state (read-only observation vs interaction) is assumed but not stated, leaving significant gaps for an AI agent attempting to orchestrate complex DeFi workflows.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'Indexed fields (filterable): accountIn, accountOut, underlyingIn', but this merely duplicates the '(indexed)' markers already present in schema field descriptions. No additional semantic context (e.g., valid swapType values, address zero semantics) is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an event subscription tool ('Event emitted by the contract', 'Subscribe via log filters'), but creates ambiguity given the tool name 'RequestSwap' aligns with action-oriented siblings like RequestBorrow and RequestDeposit. It does not clarify whether this tool initiates requests or merely observes them, nor does it distinguish from similar event tools (MarketActivated, ProxyCreated).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides only the subscription mechanism ('Subscribe via log filters') with no guidance on when to choose this over other Request* siblings or when to use event subscription versus direct contract interaction. No mention of prerequisites, filtering strategies, or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
RequestTransferDToken
Event emitted by the contract. Indexed fields (filterable): from, to. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| from | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| amount | Yes | uint256 (uint256, pass as decimal string). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but provides confusing behavioral signals. It suggests log-filter subscription behavior rather than explaining that this likely initiates a state-changing token transfer. It fails to disclose side effects (e.g., token movement, fees), success criteria, or whether the transfer is immediate or queued.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief, the description is structured as fragmented sentences ('Event emitted by the contract.', 'Indexed fields...', 'Subscribe via log filters.') that scan awkwardly. The front-loaded focus on event emission rather than tool action creates immediate confusion.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Missing critical context for a DeFi transfer tool: no explanation of what DTokens represent (debt tokens), no output schema description (return values, transaction hashes), no mention of slippage, authorization, or confirmation requirements. The event-focused description distracts from the necessary operational details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds valuable blockchain-specific context by identifying which parameters are indexed and filterable on-chain ('Indexed fields: from, to'), which aids in understanding log filtering capabilities even if the primary tool purpose is misidentified.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description fails to state that this tool initiates a transfer request for DTokens. Instead, it describes an event emission/subscription ('Event emitted by the contract... Subscribe via log filters'), which contradicts the action-oriented 'Request' naming convention and the required value parameters (from, to, amount). It does not clarify whether this executes a transfer, creates a request, or queries events.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings like RequestTransferEToken, RequestWithdraw, or Deposit. There is no mention of prerequisites, authorization requirements, or workflow context within the DeFi protocol.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
RequestTransferEToken
Event emitted by the contract. Indexed fields (filterable): from, to. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| from | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| amount | Yes | uint256 (uint256, pass as decimal string). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It adds value by explaining that from/to are filterable due to being indexed, and clarifies this is a subscription action. However, it lacks details on return format, streaming vs polling behavior, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The text is brief but inefficiently structured. The opening sentence 'Event emitted by the contract' wastes space with vague passivity and fails to front-load the actual tool purpose (subscribing to logs). The indexed fields note is useful but fragmented.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema provided, the description should explain what the subscription returns (event logs, transaction receipts). It also omits what the RequestTransferEToken event represents in the protocol context and how the amount parameter behaves differently than indexed fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents the Ethereum address and uint256 types. The description mentions from/to to emphasize they are filterable, adding slight semantic value, but largely restates the schema's '(indexed)' markers without adding syntax examples or explaining the amount parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the tool as a log subscription for an event and notes the indexed fields, but 'Event emitted by the contract' is vague and passive. It fails to distinguish from the similar sibling RequestTransferDToken or specify which contract emits the event.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to subscribe to this event versus using direct state queries, or how it differs from RequestTransferDToken. No prerequisites, alternatives, or usage patterns are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
RequestWithdraw
Event emitted by the contract. Indexed fields (filterable): account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
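The filterability noted in the table hinges on how Ethereum logs work: indexed parameters become 32-byte topics an agent can filter on, while non-indexed ones like `amount` only appear in the log's data field. A minimal sketch of the filter an agent might build for this event — the contract address and topic hash below are placeholders, not values published by this listing:

```python
import json

# Placeholder values -- the listing does not publish the contract address
# or the event's keccak-256 topic hash.
CONTRACT = "0x0000000000000000000000000000000000000001"
REQUEST_WITHDRAW_TOPIC = "0x" + "00" * 32  # stand-in for keccak256 of the event's Solidity signature

def build_log_filter(account: str) -> dict:
    """Build eth_getLogs params: the indexed `account` becomes a 32-byte
    topic, while the non-indexed `amount` is not filterable and must be
    decoded from each returned log's `data` field instead."""
    padded = "0x" + account.lower().removeprefix("0x").rjust(64, "0")
    return {
        "address": CONTRACT,
        "topics": [REQUEST_WITHDRAW_TOPIC, padded],  # topic0 = signature, topic1 = account
        "fromBlock": "latest",
    }

params = build_log_filter("0xAbCd000000000000000000000000000000001234")
print(json.dumps(params))
```

Note that addresses are left-padded to 32 bytes before use as topics; passing an unpadded address to a node would match nothing.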
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully identifies the tool as an event subscription with filterable indexed fields ('account'), which is essential context. However, it lacks details on subscription lifecycle (continuous vs one-time), output format, or failure modes that would fully prepare an agent to invoke the tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences efficiently convey the essential information without redundancy. The structure is front-loaded (event type first, then filter details, then subscription method), with every sentence earning its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a two-parameter tool with complete schema documentation and no output schema, the description is minimally adequate. It correctly identifies the operational model (event subscription), but lacks domain context (e.g., what assets are being withdrawn, the relationship to the Withdraw function) that would elevate it for a complex DeFi protocol toolset.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (both 'account' and 'amount' are fully documented with types and formats), establishing a baseline score of 3. The description adds minimal semantic value beyond the schema, only noting that 'account' is indexed/filterable—a fact already present in the schema's '(indexed)' notation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the tool as an event subscription service ('Event emitted by the contract', 'Subscribe via log filters'), which provides a clear technical purpose. However, it fails to explain what the RequestWithdraw event represents semantically (e.g., a user requesting to withdraw assets), leaving the agent to infer business logic from the tool name alone. It minimally distinguishes from action-based siblings like 'Withdraw' by labeling itself as an event.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the subscription mechanism ('Subscribe via log filters') but provides no guidance on when to use this event subscription versus calling the 'Withdraw' action or other alternatives. There are no 'when-not-to-use' exclusions or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
TrackAverageLiquidity
Event emitted by the contract. Indexed fields (filterable): account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Fails to clarify whether this is a state-changing transaction (enabling tracking) or a read-only subscription, what the return format is, or what 'AverageLiquidity' represents. 'Event emitted by the contract' is metadata about the underlying smart contract, not behavioral transparency for the tool invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences that are information-dense but poorly structured. The description leads with event metadata rather than tool action, and ends with the subscription mechanism. Front-loading would improve clarity (e.g., 'Subscribe to AverageLiquidity tracking events...').
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a blockchain interaction tool with no output schema and no annotations, the description omits critical context: return format (event logs vs transaction receipt), whether this creates a persistent subscription or one-time query, and what liquidity data is actually tracked.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds valuable context that the account parameter is 'Indexed fields (filterable)', which indicates it can be used as a topic filter in log subscriptions. This adds semantic meaning beyond the schema's type description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it is an 'Event emitted by the contract' which describes the blockchain event rather than what the tool does (e.g., 'Subscribe to...' or 'Query...'). The verb 'Track' in the name implies action, but the description is passive and confusing regarding whether this tool enables tracking, queries historical events, or subscribes to future events.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus sibling UnTrackAverageLiquidity, or how it relates to liquidity calculation in borrowing/depositing workflows. The phrase 'Subscribe via log filters' hints at usage but lacks context on when this subscription is necessary versus querying state directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
UnTrackAverageLiquidity
Event emitted by the contract. Indexed fields (filterable): account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that 'account' is an indexed/filterable field and mentions the subscription mechanism, but fails to explain side effects, authentication requirements, or what the subscription returns (event payload structure).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse (three sentence fragments) with no redundant text. Front-loads the event nature, though the fragmented structure slightly impairs readability. Every phrase conveys essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter with complete schema coverage and no output schema, the description is minimally adequate. However, for an event subscription tool with zero annotations, it should explain the event payload structure or subscription lifecycle beyond just noting the indexed field.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage describing 'account' as an Ethereum address, the description adds valuable semantic context that this field is 'indexed' and 'filterable', helping the agent understand how to construct efficient log filter queries.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an event subscription tool ('Subscribe via log filters') and specifies the contract event type, but the phrasing 'Event emitted by the contract' is declarative rather than imperative, creating ambiguity with the action-oriented name 'UnTrackAverageLiquidity' (unclear if this unsubscribes or subscribes to untrack events).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to subscribe to this specific event versus sibling events like TrackAverageLiquidity, nor when log filters are preferred over other subscription methods. The instruction 'Subscribe via log filters' states how but not when.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Withdraw
Event emitted by the contract. Indexed fields (filterable): underlying, account. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | uint256 (uint256, pass as decimal string). | |
| account | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| underlying | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
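With two indexed fields, topic position matters: indexed parameters map to topics in declaration order, and an omitted position acts as a wildcard. A sketch under the assumption (taken from the table above, not confirmed by the listing) that `underlying` is declared before `account`:

```python
def withdraw_topics(underlying=None, account=None):
    """Topic list for filtering a Withdraw-style event. Indexed parameters
    map to topics in declaration order (assumed here: `underlying`, then
    `account`); a None entry is a wildcard matching any value. The
    non-indexed `amount` cannot be filtered and is decoded from the log's
    data field."""
    def pad(addr):
        return None if addr is None else "0x" + addr.lower().removeprefix("0x").rjust(64, "0")
    topic0 = "0x" + "ff" * 32  # placeholder for the event's signature hash
    return [topic0, pad(underlying), pad(account)]

# Filter on account only: leave `underlying` as a wildcard (null topic).
topics = withdraw_topics(account="0x00000000000000000000000000000000deadbeef")
```

Getting the order wrong silently filters on the wrong field, which is exactly the kind of mistake the missing documentation invites.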
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It correctly identifies this as an event emission (implying read-only observation) and notes the indexed fields, but it omits critical behavioral details like whether this creates a persistent subscription or one-time query, and what the output format or event lifecycle looks like.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-sentence structure is highly efficient with zero waste: first sentence establishes the event nature, second specifies filterable indexed fields, and third provides the subscription mechanism. Every sentence earns its place with relevant technical detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, and the complexity of the surrounding toolset (28 siblings including many related operations), the description provides minimum viable context but should explain the Withdraw event's business logic and relationship to the RequestWithdraw function to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear type documentation for all three parameters. The description adds value by explicitly noting that 'underlying' and 'account' are indexed/filterable fields, which is crucial context for Ethereum log filtering beyond the raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an event subscription tool ('Subscribe via log filters') and mentions specific filterable fields, but it fails to explain what the 'Withdraw' event semantically represents (e.g., user removing liquidity from a pool) or how it differs from sibling events like 'Deposit' or 'Borrow'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides the mechanism ('Subscribe via log filters') and mentions indexed fields can be filtered, but it lacks explicit guidance on when to use this event subscription versus the 'RequestWithdraw' function sibling, or whether this queries historical events versus subscribing to new ones.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.