Tenzro LayerZero V2 MCP

Server Details

LayerZero V2 MCP: EndpointV2 quotes, OFT transfers, Stargate V2, Value Transfer API, DVNs.

Status: Healthy
Transport: Streamable HTTP
Repository: tenzro/tenzro-network
GitHub Stars: 0

Tool Descriptions: A

Average 4.1/5 across 20 of 20 tools scored. Lowest: 3.4/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct operation (quoting, sending, tracking, listing) on specific sub-protocols (OFT, Stargate, generic messaging, value transfer). Tools like lz_get_message, lz_get_messages_by_address, and lz_track_message are clearly differentiated by input parameter (GUID, address, tx hash).

Naming Consistency: 5/5

All tools follow a consistent 'lz_verb_noun' pattern (e.g., lz_list_chains, lz_quote_fee, lz_send_message). The prefix 'lz_' is used uniformly, and verbs are aligned with typical CRUD/action terms (list, get, quote, send, track).

Tool Count: 5/5

20 tools is well-scoped for a server covering multiple cross-chain protocols (generic messaging, OFT, Stargate, value transfer) plus auxiliary info (chains, deployments, DVNs). Each tool serves a specific purpose without redundancy.

Completeness: 4/5

The tool set covers the full lifecycle: quoting, building calldata, message tracking, and lookup for all major LayerZero V2 features. Minor gaps like composing with airdrop or advanced permissioned options are not addressed, but core workflows are complete.

Available Tools (20 tools)
lz_encode_options: A

Encode LayerZero TYPE_3 options bytes for use with EndpointV2.quote() and EndpointV2.send(). TYPE_3 (version tag 0x0003) is the current standard options format for LayerZero V2. Builds binary options with executor worker ID 0x01 and option type 1 (lzReceive) with configurable gas limit and native token drop amount. Returns the hex-encoded options bytes.
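The byte layout described above can be sketched directly. The helper below is a minimal illustration (not code from this server) that follows the stated format: a 0x0003 version tag, executor worker ID 0x01, a 2-byte option length, option type 1 (lzReceive), a uint128 gas limit, and an optional uint128 native-drop amount.

```python
def encode_type3_options(gas_limit: int = 200_000, native_drop: int = 0) -> str:
    """Sketch of LayerZero V2 TYPE_3 options encoding, per the description above."""
    params = gas_limit.to_bytes(16, "big")           # uint128 gas limit
    if native_drop > 0:
        params += native_drop.to_bytes(16, "big")    # optional uint128 native drop
    option = bytes([0x01]) + params                  # option type 1: lzReceive
    out = (0x0003).to_bytes(2, "big")                # TYPE_3 version tag
    out += bytes([0x01])                             # executor worker ID
    out += len(option).to_bytes(2, "big")            # option length (type + params)
    out += option
    return "0x" + out.hex()
```

With the defaults this yields the familiar 200k-gas options blob; passing a nonzero native_drop appends a second uint128 to the same executor option.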

Parameters (JSON Schema)
gas_limit (optional): Gas limit for lzReceive execution on destination (default: 200000)
native_drop (optional): Native token amount to drop on destination in wei (default: 0)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that the tool builds binary options and returns hex-encoded bytes, which is sufficient for a pure encoding function. However, it could mention constraints like valid gas limit ranges or failure behavior, though not critical for basic usage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief, front-loaded with purpose and technical details, with no extraneous text. Every sentence provides necessary context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains the return format (hex-encoded options bytes). It covers parameters, defaults, and integration context. For a simple encoding tool, this is complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with clear defaults (200000 gas, 0 native drop) and formatting. The description adds context that parameters are optional and defaults are applied, and that output is hex-encoded. This fully compensates for the lack of explicit parameter documentation beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool encodes LayerZero TYPE_3 options bytes for EndpointV2.quote() and EndpointV2.send(), specifying version tag, executor worker ID, option type, and configurable gas limit/drop. This is highly specific and distinguishes it from sibling tools, none of which are encoding tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly indicates the tool is used with quote() and send(), and mentions TYPE_3 is the current standard for V2. It does not explicitly exclude other use cases or mention alternatives, but given there are no sibling encode tools, the context is sufficient for an agent to know when to invoke it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lz_get_chain_rpc: A

Get the public RPC URL for a supported chain by name. These RPC URLs are used by the server for eth_call queries against EndpointV2.
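For context, an eth_call query against a chain's RPC URL is a standard Ethereum JSON-RPC request. The sketch below shows its shape; the helper and its defaults are illustrative, not code from this server.

```python
import json

def build_eth_call(to: str, data: str, request_id: int = 1) -> str:
    """Build the JSON body of an eth_call request (Ethereum JSON-RPC shape)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "eth_call",
        "params": [{"to": to, "data": data}, "latest"],  # call at latest block
    })
```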

Parameters (JSON Schema)
chain_name (required): Chain name to get RPC URL for (e.g. 'ethereum', 'arbitrum', 'base')

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It accurately describes a simple read operation but does not disclose any potential limitations (e.g., what happens for unsupported chains, rate limits, or authentication). The description is adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two short, clear sentences. The first sentence states the purpose directly, and the second provides additional context. Every sentence is valuable and there is no verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema) and the lack of annotations, the description is mostly complete. It could mention the return format (a string URL) for full completeness, but the current content is otherwise sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with a single parameter 'chain_name' that includes examples. The description adds minimal extra meaning beyond restating that it gets an RPC URL. Per guidelines, high schema coverage gives a baseline of 3, and the description does not improve it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool gets a public RPC URL for a supported chain by name. It uses a specific verb 'Get' and identifies the resource 'public RPC URL'. This distinguishes it from sibling tools like lz_list_chains which list chains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that the RPC URLs are used by the server for eth_call queries against EndpointV2, providing context on when to use this tool. However, it does not specify when not to use it or mention alternative tools, so it lacks exclusion guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lz_get_deployments: A

Get LayerZero V2 deployment addresses across all supported chains. Returns EndpointV2, SendLib, ReceiveLib, DVN, and Executor addresses from the LayerZero Metadata API.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It does not mention side effects (none expected), authentication requirements, rate limits, or whether data is real-time or cached. For a read-only retrieval tool, this level of detail is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-constructed sentences that efficiently convey the tool's purpose and return content without extraneous information. Every word is meaningful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description partially explains return values by listing address types. However, it does not specify the data structure (e.g., map of chain to addresses) or whether all chains are included. For a simple tool, it is adequate but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters with full coverage (100%). The description adds value by listing the types of addresses returned, which goes beyond the empty schema. Baseline for 0 params is 4, and the description delivers additional meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves LayerZero V2 deployment addresses across all supported chains, listing specific address types (EndpointV2, SendLib, etc.). This verb+resource combination is distinct from sibling tools like lz_get_chain_rpc or lz_get_message.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. The description implies it is for an overview of all deployments, but lacks exclusion criteria or context about alternative tools for specific addresses.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lz_get_message: A

Get a LayerZero message by its GUID (Global Unique Identifier). The GUID is returned by EndpointV2.send() and can also be found in the Scan API message tracking response.

Parameters (JSON Schema)
guid (required): LayerZero message GUID (hex, with or without 0x prefix)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full burden for behavioral disclosure. It only states what the tool does, not its side effects, permissions, or return format. For a simple read operation, this is minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no unnecessary words. Front-loads the core action and immediately provides helpful context about GUID sources.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should clarify what the returned message contains. It does not mention return structure, error conditions, or data freshness. This leaves the agent uncertain about the response format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value by explaining where to obtain the GUID (EndpointV2.send() or Scan API), providing context beyond the schema's type and format description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a LayerZero message by GUID, distinguishing it from siblings like lz_get_messages_by_address which lists messages by address. The verb 'Get' and specific resource 'LayerZero message' provide clear purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that the GUID comes from EndpointV2.send() or Scan API, guiding the agent on when to use this tool. While it lacks explicit when-not or alternatives, the context is clear and sufficient for correct usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lz_get_messages_by_address: A

Get LayerZero messages for a specific wallet address. Queries the Scan API for all messages sent or received by the given address. Returns message details including status, source/destination chains, and timestamps.

Parameters (JSON Schema)
limit (optional): Maximum number of messages to return (default: 10)
address (required): Wallet address to get LayerZero messages for (hex, with or without 0x prefix)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so description must cover behavior. It mentions querying Scan API and return fields, but lacks details on rate limits, pagination, or error scenarios.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences with no superfluous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple query tool; describes return content. No output schema but context is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and descriptions exist. The tool description adds minimal extra value beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves LayerZero messages for a wallet address, queries the Scan API, and returns details. Distinguishes from siblings like lz_get_message.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage for listing all messages for an address, but no explicit when-to-use or when-not-to-use compared to siblings. No alternatives mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lz_list_chains: A

List all 16 chains supported by this LayerZero MCP server with their Endpoint IDs (EIDs). Includes Ethereum, Arbitrum, Optimism, Polygon, BSC, Avalanche, Base, Solana, zkSync, Sei, Sonic, Berachain, Story, Monad, MegaETH, and Tron. EIDs are used in EndpointV2.quote() and EndpointV2.send() to identify destination chains.
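As a rough illustration of how an agent might use the result, a client could keep a local name-to-EID map for the quote/send flow. The values below are the commonly published LayerZero V2 mainnet EIDs for a few of the listed chains; they are not taken from this document and should be confirmed against lz_list_chains.

```python
# Illustrative subset of mainnet Endpoint IDs (EIDs); the authoritative
# list comes from lz_list_chains, so treat these values as examples only.
MAINNET_EIDS = {
    "ethereum": 30101,
    "arbitrum": 30110,
    "optimism": 30111,
    "base": 30184,
}

def eid_for(chain_name: str) -> int:
    """Resolve a chain name to the EID used by EndpointV2.quote()/send()."""
    return MAINNET_EIDS[chain_name.lower()]
```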

Parameters (JSON Schema)

No parameters

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description fully explains what the tool returns (list of chains with EIDs) and how EIDs are used. With no annotations, it carries the full burden and does so transparently, with no contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no wasted words. The first states purpose and scope, the second lists the chains, and the third explains how EIDs are used. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description sufficiently explains the return (list of chains with EIDs) and lists all chains. For a simple list tool, this is complete and allows correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so schema coverage is trivially 100%. The description adds value by explaining what the output contains and how it can be used, meeting the baseline for zero parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all 16 supported chains with their EIDs, and explicitly names each chain. It distinguishes itself from sibling tools which perform operations like quoting or sending, not listing chains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when needing supported chains or their EIDs, and mentions EIDs are used in quote/send functions. It provides clear context but does not explicitly state when not to use or suggest alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lz_list_dvns: A

List all Decentralized Verifier Networks (DVNs) registered in the LayerZero V2 protocol. DVNs verify cross-chain messages by attesting to source chain state on the destination chain. Returns DVN names, addresses, supported chains, and configuration details.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description adequately describes a read-only list operation. No side effects or hidden behaviors are expected.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: the first states the action, the second explains what DVNs are, and the third lists return fields. No wasted words, front-loaded with key purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple tool with no parameters and no output schema. Description fully covers what the tool does and what it returns. No missing context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema is empty (0 parameters). Baseline score of 4 is appropriate as description adds no parameter info because none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it lists all DVNs, explains what DVNs are, and specifies return fields. Distinguishes from sibling tools which focus on chains, messages, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance. Implied usage for querying DVN overview, but lacks alternatives or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lz_oft_list: A

List all available OFT (Omnichain Fungible Token) deployments registered in the LayerZero Metadata API. Returns token symbols, contract addresses, supported chains, and deployment details.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that this is a read-only list operation, but does not mention pagination, default limits, or any side effects. The behavioral transparency is adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words, front-loaded with purpose and clear structure. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter list tool with no output schema, the description covers the operation and return value well. It lacks mention of potential pagination or result limits, but is otherwise complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters (input schema is empty object), so baseline is 4. The description adds value by specifying the exact contents returned (symbols, addresses, chains, deployment details), beyond just 'list OFT deployments'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'List all available OFT deployments' with a specific verb and resource, and distinguishes from sibling list tools like lz_list_chains by focusing on OFT and mentioning the LayerZero Metadata API.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, no context on prerequisites or exclusions, and does not mention any when-not-to-use scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lz_oft_quote: A

Quote an OFT (Omnichain Fungible Token) transfer via the LayerZero Metadata API. Returns estimated fees, exchange rates, and transfer details for bridging tokens between chains using LayerZero's OFT standard.

Parameters (JSON Schema)
amount (required): Amount to transfer (in base units as string)
dst_chain (required): Destination chain name (e.g. 'arbitrum')
src_chain (required): Source chain name (e.g. 'ethereum')
token_symbol (required): Token symbol (e.g. 'USDC', 'USDT', 'ETH')

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite no annotations, the description discloses that the tool is a read-only quote operation returning specific outputs, making its behavior clear. It does not mention side effects or dependencies, but as a quote tool, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences that effectively communicate purpose and outputs. It is front-loaded but lacks structural elements like bullet points or separate sections.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description mentions outputs (fees, exchange rates, details) but does not elaborate on format or specifics. Given no output schema, more detail on return structure would be helpful for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers 100% of parameters with descriptions, so the description adds no extra parameter meaning beyond confirming the purpose. The detail about 'base units' is already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool quotes OFT transfers and returns fees, exchange rates, and details, distinguishing it from generic quoting tools by specifying the OFT standard and LayerZero Metadata API.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for obtaining quotes for OFT transfers but does not explicitly state when to use this tool over siblings like lz_quote_fee or lz_transfer_quote, nor provides when-not-to-use conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lz_oft_send: A

Build transaction calldata for an OFT (Omnichain Fungible Token) send() call on any OFT contract. OFT V2 uses uint64 amountSD (shared decimals) instead of uint256 amountLD. Returns the hex-encoded calldata for OFT.send(SendParam, MessagingFee, refundAddress). The caller must first call lz_quote_fee or lz_oft_quote to get the messaging fee, then sign and broadcast this transaction with msg.value = nativeFee.
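The shared-decimals conversion and recipient padding mentioned above can be sketched as follows. These helpers are assumptions for illustration, not code from this server, and the shared_decimals default of 6 is the common OFT convention rather than a value stated in this document.

```python
def to_shared_decimals(amount_ld: int, local_decimals: int,
                       shared_decimals: int = 6) -> int:
    """Convert a local-decimals amount (amountLD) to shared decimals (amountSD).

    OFT V2 truncates dust: amountSD = amountLD // 10**(local - shared).
    """
    rate = 10 ** (local_decimals - shared_decimals)
    return amount_ld // rate

def pad_recipient(address_hex: str) -> str:
    """Left-pad a 20-byte EVM address to bytes32, as the description states."""
    raw = address_hex.removeprefix("0x")
    return "0x" + raw.rjust(64, "0")
```

For example, 1.5 tokens with 18 local decimals becomes 1,500,000 units at 6 shared decimals, which fits comfortably in the uint64 amountSD field.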

Parameters (JSON Schema)
amount (required): Amount to send in shared decimals (amountSD, uint64; OFT V2 uses shared decimals, not local decimals)
dst_chain (required): Destination chain name (e.g. 'arbitrum', 'base', 'solana')
gas_limit (optional): Gas limit for lzReceive on destination (default: 200000)
recipient (required): Recipient address (hex, 20 bytes for EVM; will be left-padded to bytes32)
src_chain (required): Source chain name (e.g. 'ethereum', 'arbitrum', 'base')
min_amount (optional): Minimum amount to receive in shared decimals (minAmountSD, uint64; default: 90% of amount)
oft_address (required): OFT contract address on the source chain (hex, 20 bytes)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description bears full burden. Discloses that OFT V2 uses uint64 amountSD instead of uint256 amountLD, and that the output is hex-encoded calldata. Also notes the caller must sign and broadcast. However, no mention of auth requirements, error handling, or side effects beyond mutation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, no redundant information. Front-loaded with the action and the key distinction (OFT V2 uint64). Every sentence is informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters and no output schema, the description sufficiently covers purpose, pre-requisites, parameter nuances, and return type (hex-encoded calldata). The agent can confidently use this tool with the provided context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but description adds context: explains amountSD vs amountLD, min_amount default (90% of amount), and recipient left-padded. These specifics help the agent understand parameter semantics beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states verb 'Build transaction calldata' and resource 'OFT send() call on any OFT contract'. Distinguishes from siblings like lz_oft_quote and lz_send_message by focusing on calldata generation for OFT.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clearly instructs caller to first obtain messaging fee via lz_quote_fee or lz_oft_quote, then sign and broadcast with msg.value=nativeFee. Provides a sequential usage pattern but lacks explicit when-not-to-use or alternative tool mentions.


lz_quote_fee (Grade A)

Estimate cross-chain messaging fee via LayerZero EndpointV2.quote() eth_call. Returns the native fee and LZ token fee required to send a message from the source chain to the destination endpoint. Uses the on-chain EndpointV2 contract at 0x1a44076050125825900e736c501f859c50fE728c. Selector: 0xdb9d28c6.

Parameters (JSON Schema)

- dst_eid (required): Destination chain LayerZero Endpoint ID (e.g. 30110 for Arbitrum)
- src_chain (required): Source chain name (e.g. 'ethereum', 'arbitrum', 'base')
- sender_hex (required): Hex-encoded sender address (20 bytes, with or without 0x prefix)
- message_hex (required): Hex-encoded message payload (with or without 0x prefix)
- options_hex (optional): Hex-encoded LayerZero V3 options bytes (default: lzReceive with 200000 gas if omitted)
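The default options_hex ("lzReceive with 200000 gas") is a Type-3 options blob. The byte layout below follows the commonly documented LayerZero V2 OptionsBuilder encoding (type marker, executor worker id, option length, option type, uint128 gas); treat it as a sketch and verify against the official docs before relying on it.

```python
# Sketch of the default Type-3 options encoding assumed when options_hex
# is omitted. Layout follows the usual OptionsBuilder convention.

def lz_receive_options(gas: int = 200_000) -> str:
    option = (1).to_bytes(1, "big") + gas.to_bytes(16, "big")  # type 1 + uint128 gas
    blob = (
        (3).to_bytes(2, "big")            # TYPE_3 options marker
        + (1).to_bytes(1, "big")          # executor worker id
        + len(option).to_bytes(2, "big")  # option length (17 bytes)
        + option
    )
    return "0x" + blob.hex()
```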
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries full burden. It discloses it's an eth_call (read-only, no state changes) and specifies the contract address and selector. It does not mention error handling or edge cases, but the read-only nature is well communicated.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no redundant words, leading with the purpose. Efficient and well-structured.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but the description mentions return values (native fee and LZ token fee). Lacks detail on format or potential errors, but for a simple quote tool it is fairly complete given the on-chain context.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good descriptions on each parameter. The description adds value by explaining the return (native fee and LZ token fee) and the underlying implementation, going beyond the schema.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Estimate' and resource 'cross-chain messaging fee', and references the exact contract and selector. It clearly distinguishes from sibling tools like lz_send_message or lz_oft_quote which handle sending or different quote types.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage as a prerequisite to sending a message but does not explicitly state when to use this vs alternatives like lz_oft_quote or lz_stargate_quote. No exclusions or guidance on invalid inputs.


lz_send_message (Grade A)

Build transaction calldata for EndpointV2.send() to dispatch a cross-chain message via LayerZero V2. Returns the hex-encoded calldata that should be submitted as a transaction to the EndpointV2 contract on the source chain. The caller must sign and broadcast the transaction with msg.value set to the nativeFee from lz_quote_fee. Selector: 0x5e280f11.

Parameters (JSON Schema)

- dst_eid (required): Destination chain LayerZero Endpoint ID
- receiver (required): Hex-encoded receiver address as bytes32 (left-padded to 32 bytes)
- src_chain (required): Source chain name (e.g. 'ethereum', 'arbitrum', 'base')
- message_hex (required): Hex-encoded message payload
- options_hex (optional): Hex-encoded LayerZero V3 options bytes (default: lzReceive with 200000 gas if omitted)
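Downstream of this tool, the caller assembles a standard EVM transaction: calldata from lz_send_message, msg.value from lz_quote_fee, addressed to the EndpointV2 contract. A minimal sketch, where the calldata and fee values are placeholders standing in for the two tools' outputs:

```python
# Sketch of the lz_quote_fee -> lz_send_message handoff described above.
# Values are placeholders; only the transaction shape is fixed.

ENDPOINT_V2 = "0x1a44076050125825900e736c501f859c50fE728c"

def build_unsigned_tx(calldata_hex: str, native_fee_wei: int) -> dict:
    """The caller signs and broadcasts this with their own tooling."""
    return {
        "to": ENDPOINT_V2,        # EndpointV2 on the source chain
        "data": calldata_hex,     # calldata from lz_send_message
        "value": native_fee_wei,  # nativeFee from lz_quote_fee
    }
```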
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that the tool returns calldata rather than executing anything, the need for signing and broadcasting, and the contract selector.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently convey purpose, usage, and prerequisite. No unnecessary words, and the main action is front-loaded.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers key aspects: source chain, destination, receiver, message, options, and prerequisite fee. Does not detail return format beyond hex-encoded calldata, but that is sufficient for the tool's purpose.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already explains all 5 parameters. The description adds minimal extra parameter-level detail beyond the schema, earning the baseline 3.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool builds transaction calldata for EndpointV2.send() to dispatch a cross-chain message via LayerZero V2, and returns hex-encoded calldata. This specific verb+resource combination distinguishes it from siblings like lz_quote_fee.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context that the caller must sign and broadcast the transaction with msg.value set to nativeFee from lz_quote_fee, implying a prerequisite. However, it does not explicitly state when not to use this tool or list alternatives.


lz_stargate_quote (Grade A)

Quote a Stargate V2 native bridge transfer fee. Calls quoteSend() on the StargatePoolNative contract (ETH, USDC, or USDT) to get the exact LayerZero messaging fee for bridging between supported EVM chains. Returns the fee in wei and the minimum amount received after slippage. Supported tokens: ETH (Ethereum/Optimism/Arbitrum/Base), USDC (6 chains), USDT (6 chains).

Parameters (JSON Schema)

- token (required): Token symbol: 'ETH', 'USDC', or 'USDT'
- amount (required): Amount to bridge in base units (wei for ETH, micro-units for USDC/USDT)
- dst_chain (required): Destination chain name (e.g. 'base', 'ethereum', 'arbitrum')
- src_chain (required): Source chain name (e.g. 'optimism', 'ethereum', 'base', 'arbitrum')
- wallet_address (required): Sender/recipient wallet address (hex, 20 bytes)
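The base-unit convention for `amount` (wei for ETH, 6-decimal micro-units for USDC/USDT) is easy to get wrong. A small sketch of the conversion; the helper name and decimal table are illustrative, not part of the server:

```python
# Sketch: converting human-readable amounts into the base units the
# `amount` parameter expects (wei for ETH, micro-units for USDC/USDT).
from decimal import Decimal

DECIMALS = {"ETH": 18, "USDC": 6, "USDT": 6}

def to_base_units(human_amount: str, token: str) -> int:
    scaled = Decimal(human_amount) * (10 ** DECIMALS[token])
    if scaled != scaled.to_integral_value():
        raise ValueError("amount has more precision than the token supports")
    return int(scaled)
```

Using Decimal on a string input avoids the float rounding that would corrupt, say, "0.1" ETH.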
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses the contract call (quoteSend()), the return values (fee in wei, min amount after slippage), and constraints (supported tokens/chains). It does not explicitly state that it is read-only or safe, but given it's a quotation tool, the behavior is sufficiently transparent for an agent to understand it does not modify state.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences covering purpose, mechanics, output, and constraints, with no redundant information. It is front-loaded with the key action and efficiently provides all necessary context without unnecessary detail.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters, no output schema, and no annotations, the description is fairly complete: it explains what the tool does, how it works, what it returns, and lists supported tokens/chains. It lacks details like default slippage or exact return structure, but is sufficient for an agent to use correctly.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with detailed descriptions for each parameter (e.g., amount in base units). The description adds context about supported chains and tokens but does not significantly enhance understanding beyond the schema. It clarifies the return format, which is not a parameter, so baseline score of 3 is appropriate.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Quote a Stargate V2 native bridge transfer fee' and explains it calls quoteSend() on specific contracts for ETH, USDC, or USDT. It lists supported tokens and chains, distinguishing it from sibling tools like lz_oft_quote (OFT tokens) or lz_transfer_quote (generic transfers). The verb 'quote' and resource 'Stargate V2 native bridge' are specific and unambiguous.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing to obtain a fee quotation for Stargate V2 native bridging, and lists supported tokens and chains, providing clear context. However, it does not explicitly state when not to use it (e.g., for OFT tokens) or mention alternatives like lz_oft_quote, leaving some ambiguity for agents without prior knowledge of siblings.


lz_stargate_send (Grade A)

Build transaction calldata for a Stargate V2 sendToken() bridge transfer. Returns hex-encoded calldata, the target contract address, and the msg.value to include. The caller must sign and broadcast the transaction. For native ETH, msg.value = nativeFee + bridgeAmount. For ERC-20 tokens (USDC/USDT), approve the pool contract first, then msg.value = nativeFee only. Supported: ETH (4 chains), USDC (6 chains), USDT (6 chains).

Parameters (JSON Schema)

- token (required): Token symbol: 'ETH', 'USDC', or 'USDT'
- amount (required): Amount to bridge in base units (wei for ETH, micro-units for USDC/USDT)
- dst_chain (required): Destination chain name (e.g. 'base', 'ethereum', 'arbitrum')
- src_chain (required): Source chain name (e.g. 'optimism', 'ethereum', 'base', 'arbitrum')
- slippage_bps (optional): Slippage tolerance in basis points (default: 50 = 0.5%)
- wallet_address (required): Sender/recipient wallet address (hex, 20 bytes)
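The description's msg.value and slippage rules can be sketched directly. The helper names are illustrative, and nativeFee would come from lz_stargate_quote:

```python
# Sketch of the msg.value and slippage rules stated in the description.
# nativeFee comes from lz_stargate_quote; values here are illustrative.

def min_amount_out(amount: int, slippage_bps: int = 50) -> int:
    """Apply the slippage tolerance (default 50 bps = 0.5%)."""
    return amount * (10_000 - slippage_bps) // 10_000

def required_msg_value(token: str, native_fee: int, bridge_amount: int) -> int:
    """Native ETH rides along in msg.value; ERC-20 amounts are pulled
    after a prior approve, so only the LayerZero fee is attached."""
    return native_fee + bridge_amount if token == "ETH" else native_fee
```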
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so description covers full behavioral burden. Discloses return values, msg.value calculation, caller responsibility, and supported assets. Does not detail failure modes or gas behavior but sufficient for core functionality.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Six terse sentences with clear hierarchy: purpose/output, caller rules and token-specific logic, supported chains/tokens. No unnecessary words, front-loaded with key information.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description explains return format and provides all necessary operational details (native vs ERC-20, approval prerequisite). Missing notes on potential reverts or exact contract interface, but covers critical aspects.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline 3. Description adds value by specifying units for amount, default slippage, and clarifying wallet_address role. Enhances schema without redundancy.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it builds transaction calldata for Stargate V2 sendToken(), lists return components, and differentiates from siblings like lz_oft_send and lz_transfer_build by being Stargate-specific.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides detailed usage context: caller must sign/broadcast, different rules for native ETH vs ERC-20, and supported tokens/chains. Lacks explicit when-not-to-use or direct alternatives but gives clear preconditions.


lz_track_message (Grade B)

Track a cross-chain LayerZero message by the source transaction hash. Queries the LayerZero Scan API for message status: INFLIGHT, CONFIRMING, DELIVERED, FAILED, or BLOCKED. Returns the full message details including source/destination chains, GUID, and current status.

Parameters (JSON Schema)

- tx_hash (required): Transaction hash to track LayerZero message for (with or without 0x prefix)
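The status vocabulary above suggests a simple polling pattern: keep querying while the message is in a transient state, stop on a terminal one. A sketch, where `get_status` is a hypothetical stand-in for a lz_track_message call; only the status names come from the tool description:

```python
# Sketch of a polling loop over the statuses listed above. `get_status`
# is a hypothetical callable wrapping lz_track_message.
import time

TERMINAL = {"DELIVERED", "FAILED", "BLOCKED"}

def wait_for_terminal(tx_hash: str, get_status, poll_seconds: float = 10, max_polls: int = 60) -> str:
    for _ in range(max_polls):
        status = get_status(tx_hash)
        if status in TERMINAL:
            return status
        time.sleep(poll_seconds)  # INFLIGHT / CONFIRMING: keep waiting
    raise TimeoutError(f"message for {tx_hash} still pending")
```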
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that it queries the LayerZero Scan API and returns possible statuses (INFLIGHT, CONFIRMING, etc.) and details like source/destination chains and GUID. However, it does not mention rate limits, authentication, or any side effects.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: the first defines the purpose, the second adds mechanism and output details. No redundant information. Could be slightly improved by noting the required parameter, but overall very efficient.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description adequately explains what it does and what is returned. However, it lacks details on interpreting statuses or any limitations (e.g., timeouts). Without output schema, the agent has to infer the structure from the description's mention of fields.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage with a clear description for the single parameter (tx_hash). The description adds minimal extra context by stating 'by the source transaction hash', but this is already implied. With full schema coverage, a score of 3 is appropriate.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'track', the resource 'cross-chain LayerZero message', and the method 'by source transaction hash'. It distinguishes from sibling tools like lz_get_message (which likely uses a different identifier) and lz_get_messages_by_address (which filters by address).


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives. It neither states when to use it nor when not to, and does not mention any prerequisites or dependencies. Sibling lookup tools exist, but no comparison with them is made.


lz_transfer_build (Grade A)

Build signable transaction steps from a Value Transfer API quote. Takes a quoteId from lz_transfer_quote and returns pre-built transaction calldata (to, data, value) that the caller can sign and broadcast. Handles approval and bridge steps automatically.

Parameters (JSON Schema)

- quote_id (required): Quote ID returned from lz_transfer_quote
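The implied workflow is quote, then build, then sign and broadcast each returned step. A sketch, where all three callables are hypothetical stand-ins for lz_transfer_quote, lz_transfer_build, and the caller's own signer; only the quoteId handoff comes from the tool descriptions:

```python
# Sketch of the quote -> build -> sign/broadcast flow implied above.
# The callables are hypothetical stand-ins, not the server's API.

def run_transfer(quote_fn, build_fn, sign_and_send_fn):
    quote = quote_fn()                   # e.g. {"quoteId": "..."}
    steps = build_fn(quote["quoteId"])   # [{"to", "data", "value"}, ...]
    # approval step(s), if any, precede the bridge step
    return [sign_and_send_fn(step) for step in steps]
```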
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the burden. It describes return format (calldata with to, data, value) and mentions handling approval and bridge steps automatically. However, it doesn't disclose potential side effects (e.g., idempotency, state changes) or permissions needed. Adequate but could be more explicit.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no fluff, front-loaded with the main action. Every word serves a purpose, making the description efficient and easy to parse for an AI agent.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one parameter, no output schema, and clear context from siblings, the description is fairly complete. It explains input, output format, and automation. Could mention if the function is synchronous or expected return type in more detail, but overall sufficient.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter quote_id is described in the schema as 'Quote ID returned from lz_transfer_quote', and the description repeats similar information. With 100% schema coverage, the description adds no new semantic value beyond the schema, earning a baseline score of 3.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Build' and clearly specifies the resource: 'signable transaction steps from a Value Transfer API quote'. It differentiates from siblings like lz_transfer_quote, which returns a quote, and lz_transfer_status, which checks status. The purpose is unambiguous.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates that the input is a quoteId from lz_transfer_quote, implying a sequential usage pattern. It does not explicitly state when not to use the tool or list alternatives, but the sibling-tool context suggests the flow. The guidance is nearly complete, lacking only an explicit "use after lz_transfer_quote" instruction.


lz_transfer_chains (Grade A)

List all chains supported by the LayerZero Value Transfer API. Returns 130+ chains including Solana, all EVM L1s and L2s, and their chain keys for use in lz_transfer_quote.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full burden. It discloses the return includes 130+ chains and chain keys, but does not mention if it's read-only or any other behavioral traits. Adequate for a simple listing tool.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences, front-loaded with the main action, no wasted words. Highly concise.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description must explain return values. It does list what is returned but lacks details on structure (e.g., array vs object) and whether pagination applies. Also does not differentiate from the similar sibling 'lz_list_chains'.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters and 100% schema coverage. The description does not need to add parameter details; baseline 4 applies.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all chains supported by the LayerZero Value Transfer API and returns chain keys. However, it does not differentiate from the sibling tool 'lz_list_chains', which may cause confusion for an agent.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It implies usage before 'lz_transfer_quote' by mentioning chain keys for that purpose. But it does not explicitly state when to use this tool over alternatives like 'lz_list_chains', nor does it provide exclusions.


lz_transfer_quote (Grade A)

Get a cross-chain transfer quote from the LayerZero Value Transfer API. This is the newest LayerZero API (replaces deprecated Stargate REST API) supporting 130+ chains including Solana. Returns quotes with fees, estimated times, and a quote ID for building transaction steps. Use 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE as the token address for native ETH.

Parameters (JSON Schema)

- amount (required): Amount to transfer in base units (wei for ETH, smallest unit for tokens)
- dst_chain (required): Destination chain key (e.g. 'base', 'solana', 'ethereum')
- dst_token (required): Destination token address (use 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE for native ETH)
- src_chain (required): Source chain key (e.g. 'optimism', 'ethereum', 'base', 'solana')
- src_token (required): Source token address (use 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE for native ETH, or contract address for ERC-20)
- dst_address (required): Recipient wallet address on the destination chain
- src_address (required): Sender wallet address on the source chain
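The native-ETH sentinel convention noted above is worth centralizing rather than retyping. A tiny sketch; the helper name is illustrative:

```python
# Sketch: the sentinel address for native ETH noted in the description.
NATIVE_SENTINEL = "0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE"

def token_param(erc20_address: "str | None") -> str:
    """src_token/dst_token value: the sentinel for native ETH,
    otherwise the ERC-20 contract address as given."""
    return NATIVE_SENTINEL if erc20_address is None else erc20_address
```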
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure. It indicates the tool is non-destructive (returns a quote), specifies output contents (fees, estimated times, quote ID), and notes a critical input convention (native ETH address). This provides sufficient transparency for safe invocation.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is composed of 4 sentences, each serving a purpose: primary function, API context, output summary, and key input detail. It is concise and front-loaded with the main action, avoiding unnecessary words.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description adequately summarizes the return value (fees, estimated times, quote ID) and outlines the tool's role in a larger workflow. Given the parameter count and complexity, the description provides enough context for an agent to use it effectively.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already has 100% description coverage for all 7 required parameters, so the description adds minimal extra semantics. It reiterates the native ETH address convention and mentions base units, which is already in the schema. Baseline 3 is appropriate as the schema handles param details.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it gets a cross-chain transfer quote from LayerZero API, emphasizes it's the newest API replacing the deprecated Stargate REST API, and mentions support for 130+ chains including Solana. This distinguishes it from sibling tools like lz_stargate_quote and lz_oft_quote.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states this is for cross-chain transfer quotes and hints at a workflow (get quote then use quote ID for building). It mentions replacement of deprecated Stargate API, suggesting when to prefer this over lz_stargate_quote, but does not discuss when NOT to use it or contrast with other quote tools like lz_oft_quote.


lz_transfer_status (Grade A)

Check the status of a Value Transfer API transfer by its quote ID. Returns current transfer status, source/destination transaction hashes, and delivery progress.

Parameters

Name      Required  Description                                   Default
quote_id  Yes       Quote ID or transfer ID to check status for
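Since MCP tools are invoked via the JSON-RPC `tools/call` method, a status check against this tool might look like the following sketch. The envelope shape follows the MCP specification; the quote ID value is a placeholder, not a real transfer.

```python
import json

# Hypothetical MCP tools/call request for lz_transfer_status.
# "example-quote-id-123" is a placeholder, not a real quote ID.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lz_transfer_status",
        "arguments": {"quote_id": "example-quote-id-123"},
    },
}

# Serialize for sending over a Streamable HTTP transport.
print(json.dumps(request, indent=2))
```

A client would POST this body to the server's endpoint and read the transfer status, transaction hashes, and delivery progress from the result.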
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It mentions the return fields but does not state that the tool is read-only, that it is idempotent, or what authorization it requires. The description is partially transparent but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no unnecessary words. It front-loads the core purpose and then specifies the returned data, achieving high conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input (one required string) and no output schema, the description covers the essential purpose and return information. However, it lacks details on status codes, error handling, and timeouts, which would make it fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds little beyond the schema: the phrase 'by its quote ID' restates the parameter without clarifying its input format or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action (check status), the resource (transfer by quote ID), and the specific returned data (status, hashes, progress). This effectively distinguishes it from sibling tools like lz_transfer_quote or lz_transfer_build.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is used after initiating a transfer to get status, but it does not explicitly state when to use it versus siblings like lz_get_message or lz_track_message. No when-not-to-use or alternative guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lz_transfer_tokens (A)

List tokens available for cross-chain transfer via the LayerZero Value Transfer API. Optionally filter by chain. Returns token addresses, symbols, decimals, and which chains they're available on.

Parameters

Name   Required  Description                                                       Default
chain  No        Filter tokens by chain key (optional, e.g. 'optimism', 'solana')
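Because `chain` is optional, an agent can either list every supported token or narrow the result to one chain. A minimal sketch of the two argument shapes, using a chain key taken from the schema's own examples:

```python
def build_call(arguments):
    """Wrap lz_transfer_tokens arguments in an MCP tools/call params object."""
    return {"name": "lz_transfer_tokens", "arguments": arguments}

# No filter: list every token available for cross-chain transfer.
all_tokens_call = build_call({})

# Filtered: only tokens available on Optimism ('optimism' is a
# chain key quoted from the parameter description).
optimism_call = build_call({"chain": "optimism"})

print(all_tokens_call)
print(optimism_call)
```

In a typical workflow this call would precede a quote request, letting the agent pick a valid token address and decimals before asking for pricing.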
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes a read operation with no side effects, but does not disclose behavior such as pagination, rate limits, or required auth. Adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences: the first states the purpose, the second the optional filter, and the third lists return values. No filler words; key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple listing tool with one optional parameter and no output schema, the description adequately covers what the tool does and what it returns. Complements schema well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and already describes the chain parameter. The description merely echoes the schema, adding no new semantic information beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists tokens for cross-chain transfer, details the optional filter by chain, and enumerates return fields. This distinguishes it from siblings like lz_oft_list (likely OFT-specific) and lz_list_chains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions optional filtering by chain, giving clear usage context. However, it does not explicitly state when not to use this tool or name alternatives, though the context suggests it's a prerequisite for other transfer tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
