Glama

Server Details

Chainlink MCP: CCIP, CCT pools, Data Feeds, Data Streams, VRF v2.5, PoR, Automation, Functions.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: tenzro/tenzro-network
GitHub Stars: 0

Tool Descriptions: A

Average 3.7/5 across 20 of 20 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clear prefix (ccip_, chainlink_, ds_, por_, vrf_) indicating the product, and within each group, verbs and nouns are distinct (e.g., get_fee vs get_lanes vs send_message). No two tools appear to do the same thing.

Naming Consistency: 5/5

All tools follow snake_case with a product prefix and verb_noun pattern (e.g., ccip_get_fee, chainlink_check_upkeep, ds_list_feeds). No deviations or mixed conventions.

Tool Count: 4/5

With 20 tools covering multiple Chainlink products, the count is slightly high but still reasonable for a comprehensive integration. Each tool serves a distinct function without feeling excessive.

Completeness: 4/5

The tool set covers key operations for CCIP, Data Feeds, Automation, Functions, Data Streams, Proof of Reserve, and VRF. While some minor operations (e.g., token management) are missing, the core workflows are well-supported.

Available Tools

20 tools
ccip_get_fee: A

Estimate CCIP cross-chain messaging fee via Router.getFee() eth_call. Returns the native fee required to send a CCIP message from the source chain to the destination chain. Supports Ethereum, Base, and Arbitrum as source chains.

Parameters (JSON Schema):
- data_hex (optional): Hex-encoded data payload to send (with or without 0x prefix). Use '0x' or '' for empty
- receiver (required): Hex-encoded receiver address on the destination chain (with or without 0x prefix)
- fee_token (optional): Fee token address. Use zero address (0x0000...0000) for native gas token payment
- src_chain_id (required): Source chain identifier: 'ethereum', 'base', 'arbitrum', or a chain ID number
- token_amounts (optional): Token amounts to transfer as JSON array of {token, amount} objects. Empty array for message-only
- dst_chain_selector (required): Destination CCIP chain selector (uint64). E.g. 4949039107694359620 for Arbitrum

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It correctly identifies the operation as an eth_call (read-only) and specifies the return as the native fee required. However, it does not disclose potential error conditions, rate limits, or the exact return format, which are important for an agent invoking the tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two concise sentences with no extraneous details. Every sentence adds value: the first states purpose and method, the second specifies the return and supported chains. Well-structured and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters (3 required) and no output schema, the description covers the high-level purpose but does not specify the return format (e.g., decimal string, hex). It also does not explain how parameters like fee_token affect the estimate. The sibling tools are not referenced for differentiation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds high-level context (supported chains, native fee) but does not add meaning beyond what the schema already provides for individual parameters. It repeats the schema's intent without deeper clarification.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool estimates a CCIP cross-chain messaging fee, specifying the method (Router.getFee() eth_call) and the resource (native fee for sending a CCIP message). It also explicitly lists supported source chains (Ethereum, Base, Arbitrum), distinguishing it from sibling tools like ccip_send_message.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for fee estimation before sending a message but provides no explicit guidance on when to use this tool versus siblings (e.g., when to call this versus ccip_send_message directly). No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
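To make the parameter shapes above concrete, here is a hedged sketch of the arguments an MCP client might pass to ccip_get_fee, wrapped in a standard tools/call envelope. The receiver address is a placeholder, and the selector value is the Arbitrum example quoted in the schema; nothing here is the server's documented wire format beyond the parameter names.

```python
import json

# Hypothetical arguments for a message-only fee estimate from Ethereum to
# Arbitrum, following the parameter table above. The receiver is a
# placeholder 20-byte address, not a real contract.
arguments = {
    "src_chain_id": "ethereum",
    "dst_chain_selector": 4949039107694359620,  # Arbitrum, per the schema example
    "receiver": "0x" + "ab" * 20,
    "data_hex": "0x",                 # empty payload
    "token_amounts": json.dumps([]),  # message-only: empty JSON array
    "fee_token": "0x" + "00" * 20,    # zero address = pay in native gas token
}

# An MCP client would wrap this in a JSON-RPC tools/call request:
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "ccip_get_fee", "arguments": arguments},
}
print(request["params"]["name"])
```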

ccip_get_lanes: A

Get available CCIP lanes (source-destination chain pairs) from the Chainlink REST API. Optionally filter by source or destination chain selector.

Parameters (JSON Schema):
- environment (optional): Environment: 'mainnet' or 'testnet'. Defaults to 'mainnet'
- dest_chain_selector (optional): Optional destination chain selector to filter lanes
- source_chain_selector (optional): Optional source chain selector to filter lanes

Behavior: 3/5

With no annotations, the description carries the full burden. It implies a read-only GET operation but does not disclose response size, pagination, rate limits, or error handling. The behavioral disclosure is minimal beyond the obvious read action.

Conciseness: 5/5

Two sentences convey the core purpose and optionality efficiently. No redundant words; front-loaded with the main action. Every sentence contributes necessary information.

Completeness: 3/5

The tool is simple, with 3 optional params and no output schema. While the description covers the basic purpose, it omits what the response contains (e.g., a list of lane objects) and whether the tool is purely read-only. Given the low complexity, some additional context would improve completeness.

Parameters: 3/5

The input schema has 100% description coverage (all three parameters described). The tool description adds no additional meaning beyond repeating that filters are optional; it does not explain chain selectors or provide examples. The baseline of 3 is appropriate, as the schema does the work.

Purpose: 5/5

The description clearly states the tool retrieves 'available CCIP lanes' and mentions optional source/destination chain selector filtering. The verb 'Get' and resource 'CCIP lanes' are specific and differentiate it from sibling tools like ccip_get_fee or ccip_get_supported_chains.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like ccip_get_supported_chains or ccip_get_fee. The description does not specify prerequisites, typical use cases, or when filtering is beneficial, leaving the agent to infer usage context.
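Since the response shape is undocumented, an agent may end up filtering lanes client-side. A sketch under an assumed response structure (the lane objects and selector values below are invented for illustration):

```python
# Hypothetical lane objects; the real REST response shape is not documented
# above, so this structure is an assumption.
lanes = [
    {"source_chain_selector": 1111, "dest_chain_selector": 2222},
    {"source_chain_selector": 1111, "dest_chain_selector": 3333},
    {"source_chain_selector": 4444, "dest_chain_selector": 2222},
]

def filter_lanes(lanes, source=None, dest=None):
    """Mirror the tool's optional source/dest selector filters client-side."""
    return [
        lane for lane in lanes
        if (source is None or lane["source_chain_selector"] == source)
        and (dest is None or lane["dest_chain_selector"] == dest)
    ]

print(len(filter_lanes(lanes, source=1111)))  # 2 lanes originate from 1111
```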

ccip_get_rate_limits: A

Get CCIP Token Pool rate limiter configuration for a specific remote chain. Returns inbound and outbound rate limits (tokens per second, capacity) that control the maximum cross-chain transfer throughput. Part of CCIP v1.6+ security model.

Parameters (JSON Schema):
- chain (optional): Chain the pool is deployed on: 'ethereum', 'base', 'arbitrum'. Defaults to 'ethereum'
- pool_address (required): Token pool contract address (hex with 0x prefix)
- remote_chain_selector (required): Remote chain selector (uint64) to query rate limits for

Behavior: 4/5

With no annotations, the description clearly indicates a read-only operation ('Get') and specifies the returned data (inbound/outbound rate limits). It does not mention side effects, which is appropriate for a read. No contradictions.

Conciseness: 5/5

The description is two sentences: the first states the core purpose, the second adds output details and context. Every sentence adds value, with no redundancy.

Completeness: 4/5

Given no output schema, the description adequately explains the return values (rate limits with units and capacity). It does not document error handling or the default behavior of the optional chain parameter, but overall is sufficient for a simple read tool.

Parameters: 3/5

Schema coverage is 100%, with detailed parameter descriptions. The description adds little beyond the schema, only noting the remote chain context. The baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the verb 'Get' and the specific resource 'CCIP Token Pool rate limiter configuration for a specific remote chain'. It distinguishes from sibling tools by focusing on rate limits and mentioning the security model context.

Usage Guidelines: 3/5

The description implies usage context by mentioning that it is part of the CCIP v1.6+ security model, but does not explicitly state when to use this tool over siblings or provide any exclusion guidelines.
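The limits described above follow a token-bucket model (a refill rate in tokens per second plus a maximum capacity). A small sketch, with made-up numbers, of how an agent might use the returned values to estimate when a transfer will fit; the helper name is hypothetical:

```python
def seconds_until_capacity(needed, current_tokens, rate_per_second, capacity):
    """Estimate the wait (in seconds) before a token bucket holds `needed`
    tokens, given its current fill, refill rate, and hard capacity."""
    if needed > capacity:
        return None   # can never fit in one transfer; it must be split
    if needed <= current_tokens:
        return 0.0    # enough headroom right now
    return (needed - current_tokens) / rate_per_second

# e.g. a bucket that refills at 10 tokens/s, caps at 1_000, and holds 100:
print(seconds_until_capacity(600, 100, 10, 1_000))  # 50.0
```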

ccip_get_supported_chains: A

Get supported chains for Chainlink CCIP from the Chainlink REST API. Returns chain names, selectors, and network details.

Parameters (JSON Schema):
- environment (optional): Environment: 'mainnet' or 'testnet'. Defaults to 'mainnet'

Behavior: 3/5

No annotations are present, so the description must fully disclose behavior. It states the tool retrieves data (non-destructive) and returns specific fields, but lacks details on authentication, rate limits, or any side effects. This is adequate for a simple read tool but not comprehensive.

Conciseness: 5/5

The description is extremely concise at two sentences, front-loading the primary purpose and output. Every word serves a purpose with no redundancy or unnecessary detail.

Completeness: 4/5

Given the tool's simplicity (one optional parameter, no output schema), the description covers the essential behavior and return values. It lacks output structure details but is sufficient for an agent to understand what the tool does and what to expect.

Parameters: 3/5

Schema coverage is 100% with a well-described parameter. The description does not add any additional meaning or context beyond the schema's parameter description, so it meets the baseline but does not exceed it.

Purpose: 5/5

The description clearly states the tool retrieves supported chains for Chainlink CCIP from the REST API, specifying the output includes chain names, selectors, and network details. This verb+resource+result combination is distinct from sibling tools like ccip_get_lanes or ccip_get_supported_tokens, which focus on other aspects.

Usage Guidelines: 3/5

The description implies the tool is for fetching chain metadata from an external API but offers no explicit guidance on when to use it over alternatives or when to avoid it. No exclusions, prerequisites, or contextual hints are provided, leaving the agent to infer usage from the tool's name and purpose.

ccip_get_supported_tokens: A

Get supported tokens for Chainlink CCIP from the Chainlink REST API. Returns token addresses, symbols, and supported lanes.

Parameters (JSON Schema):
- environment (optional): Environment: 'mainnet' or 'testnet'. Defaults to 'mainnet'

Behavior: 2/5

No annotations are provided, so the description must disclose behavioral traits. It mentions the return content but does not indicate whether the operation is read-only, requires authentication, or has rate limits. The tool likely has no side effects, but this is not stated.

Conciseness: 5/5

The description is two concise sentences with no extraneous words. It efficiently conveys the tool's purpose and output without unnecessary detail.

Completeness: 3/5

Given the tool's low complexity (single optional parameter, no output schema), the description is fairly complete. It specifies what is returned, but could include additional context such as typical use cases (e.g., before sending a CCIP message) or whether the token list is dynamic.

Parameters: 3/5

Schema coverage is 100% for the single parameter 'environment', which has a clear description. The tool description adds no additional meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool retrieves supported tokens for Chainlink CCIP and specifies the returned data (addresses, symbols, lanes). It uses a specific verb 'Get' and resource 'supported tokens', and the tool name itself is descriptive, distinguishing it from sibling CCIP tools.

Usage Guidelines: 3/5

The description implies usage for obtaining token information but lacks explicit guidance on when to use this tool versus alternatives like ccip_get_fee or ccip_get_lanes. There are no usage scenarios or prerequisites mentioned.

ccip_get_token_pool: B

Get information about a CCIP Token Pool contract. Returns the pool type (Lock/Release or Burn/Mint), the token address, supported remote chains, and rate limiter config. Token Pools are part of the Cross-Chain Token (CCT) standard in CCIP v1.6+.

Parameters (JSON Schema):
- chain (optional): Chain: 'ethereum', 'base', 'arbitrum'. Defaults to 'ethereum'
- pool_address (required): Token pool contract address (hex with 0x prefix)

Behavior: 3/5

The description indicates a read-only operation ('Get information') without side effects, but with no annotations provided, it lacks disclosure of potential errors (e.g., invalid address) or authorization requirements. It does not contradict any annotations, as none exist, but fails to add significant behavioral context beyond the return fields.

Conciseness: 5/5

The description is two sentences: the first states the purpose and lists outputs, the second adds version context. It is front-loaded, efficient, and contains no redundant information. Every sentence adds value.

Completeness: 4/5

The tool has no output schema, so the description appropriately lists the returned fields. It covers the essential information for a query tool. However, it does not mention error handling (e.g., what happens for a non-existent pool) or any constraints, which would improve completeness.

Parameters: 3/5

The input schema has 100% description coverage, with clear definitions for 'chain' (including defaults) and 'pool_address' (hex format). The description adds context about the return values, which helps infer parameter purpose, but does not enhance parameter semantics beyond the schema. The baseline of 3 applies.

Purpose: 4/5

The description clearly states the tool retrieves information about a CCIP Token Pool contract and lists the specific data returned (pool type, token address, supported chains, rate limiter). It is specific about the resource and verb, but does not explicitly differentiate from sibling tools like ccip_get_supported_tokens or ccip_get_rate_limits, which have overlapping domains.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives (e.g., when to use ccip_get_token_pool vs ccip_get_supported_tokens). The description only states what it does, not the context or prerequisites. An agent would have to infer usage from the tool name and siblings.

ccip_send_message: B

Send a CCIP cross-chain message via Router.ccipSend(). Submits a signed transaction to the source chain's CCIP Router to send a message and/or tokens to the destination chain. Returns the transaction hash.

Parameters (JSON Schema):
- data_hex (optional): Hex-encoded data payload
- receiver (required): Hex-encoded receiver address on the destination chain
- fee_token (optional): Fee token address (zero address for native). Defaults to native
- gas_limit (optional): Gas limit for execution on destination chain (default: 200000)
- sender_key (required): Hex-encoded sender private key for signing the transaction
- src_chain_id (required): Source chain: 'ethereum', 'base', 'arbitrum', or chain ID
- token_amounts (optional): Token amounts to transfer as JSON array of {token, amount} objects
- dst_chain_selector (required): Destination CCIP chain selector (uint64)

Behavior: 2/5

No annotations are provided. The description indicates a write operation (it submits a signed transaction) but lacks details on side effects, required permissions, gas costs, or potential reverts. With no annotations, the description should provide more behavioral context.

Conciseness: 5/5

Three sentences, no wasted words. Front-loaded with the core action, followed by mechanism and return value. Appropriate length for the complexity.

Completeness: 3/5

There is no output schema. The description lacks context on prerequisites (e.g., funding), transaction confirmation, error handling, or how to interpret the hash. For a cross-chain message tool with 8 parameters, more completeness would be beneficial.

Parameters: 3/5

The input schema has 100% coverage, with descriptions for all parameters. The description adds no extra meaning beyond the schema, which is adequate given the high coverage, but does not enhance understanding of complex parameters like sender_key.

Purpose: 5/5

The description clearly states the tool sends a CCIP cross-chain message via a specific function (Router.ccipSend()), with a verb ('Send') and resource ('CCIP cross-chain message'), and distinguishes it from read-only sibling tools like ccip_get_fee.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives, and no preconditions or exclusions are mentioned. Siblings include other ccip_ and chainlink_ tools, but the description does not differentiate usage contexts.
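Because ccip_send_message submits a real signed transaction, an agent may want a cheap pre-flight check before invoking it. A sketch assuming the required/optional split from the parameter table above; the helper name and the placeholder key are hypothetical:

```python
def validate_send_args(arguments):
    """Hypothetical pre-flight check mirroring the schema's required fields
    for ccip_send_message, run before any transaction is signed."""
    required = {"receiver", "sender_key", "src_chain_id", "dst_chain_selector"}
    missing = required - arguments.keys()
    if missing:
        raise ValueError(f"missing required arguments: {sorted(missing)}")
    return True

args = {
    "receiver": "0x" + "ab" * 20,
    "sender_key": "0x" + "11" * 32,  # placeholder; never hard-code a real key
    "src_chain_id": "base",
    "dst_chain_selector": 4949039107694359620,
}
print(validate_send_args(args))  # True
```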

ccip_track_message: A

Track the execution status of a CCIP cross-chain message on the destination chain. Calls OffRamp.getExecutionState() to check message delivery status. States: 0=UNTOUCHED (not yet processed), 1=IN_PROGRESS (being executed), 2=SUCCESS (delivered), 3=FAILURE (execution failed).

Parameters (JSON Schema):
- message_id (required): CCIP message ID (64-byte hex, with or without 0x prefix)
- dst_chain_id (required): Destination chain: 'ethereum', 'base', 'arbitrum', or chain ID
- offramp_address (required): OffRamp contract address on the destination chain (hex with 0x prefix)

Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses the underlying contract call (OffRamp.getExecutionState()) and lists possible states, which is adequate. However, it does not explicitly state that the operation is read-only or mention any required permissions or rate limits.

Conciseness: 5/5

The description is extremely concise, with three sentences covering purpose, method, and state explanation. Every sentence adds value, and the most important information (what the tool does) comes first.

Completeness: 4/5

For a simple status-checking tool with three parameters and no output schema, the description provides essential context: the underlying call and state codes. It could mention error handling or network-specific nuances, but overall it is sufficiently complete.

Parameters: 3/5

Schema coverage is 100%, so all parameters are described in the schema. The description adds no additional parameter-level information. It does explain the state codes, but those relate to the output, not parameter semantics.

Purpose: 5/5

The description clearly states the tool's purpose: to track the execution status of a CCIP cross-chain message. It specifies the action (track), resource (execution status), and context (destination chain). This distinguishes it from sibling tools like ccip_get_fee or ccip_send_message.

Usage Guidelines: 3/5

The description implies when to use this tool (after sending a message, to check its status) but lacks explicit guidance on when not to use it or what alternatives exist. It doesn't mention other tracking methods, though none exist among siblings.
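The four execution states enumerated in the description lend themselves to a simple polling helper. A sketch (the constant and function names are illustrative, not part of the tool's API):

```python
# Execution states exactly as enumerated in the tool description above.
EXECUTION_STATES = {
    0: "UNTOUCHED",    # not yet processed on the destination chain
    1: "IN_PROGRESS",  # currently being executed
    2: "SUCCESS",      # message delivered
    3: "FAILURE",      # execution failed
}

def is_final(state):
    """A polling loop can stop once execution has succeeded or failed."""
    return state in (2, 3)

print(EXECUTION_STATES[2], is_final(2))  # SUCCESS True
```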

ds_get_reportAInspect

Get a Data Streams report for a specific feed ID. Data Streams provide sub-second, low-latency market data for crypto, forex, equities, and commodities. Returns benchmarkPrice, bid, ask, timestamps, and fee info. Common feed IDs: ETH/USD = 0x000359843a543ee2fe414dc14c7e7920ef10f4372990b79d6361cdc0dd1ba782, BTC/USD = 0x00037da06d56d083fe599397a4769a042d63aa73dc4ef57709d31e9971a5b439.

ParametersJSON Schema
NameRequiredDescriptionDefault
feed_idYesData Streams feed ID (hex string, e.g. '0x000359843a543ee2fe414dc14c7e7920ef10f4372990b79d6361cdc0dd1ba782' for ETH/USD)
timestampNoUnix timestamp to query (optional — latest if omitted)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavioral traits. It mentions the tool returns data but does not discuss side effects, authentication requirements, rate limits, or potential failure modes. The read-only nature is implied but not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with four sentences, each serving a distinct purpose: tool action, context, return values, and examples. It is front-loaded with the key verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately describes the return fields. It provides example feed IDs for common use cases. However, it does not cover error handling or authentication, which are less critical for a read-only tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already has 100% description coverage, but the description adds value by listing common feed IDs and explaining the return fields (e.g., benchmarkPrice, bid, ask), which goes beyond the schema's parameter-level documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool gets a Data Streams report for a specific feed ID, specifying the returned fields (benchmarkPrice, bid, ask, timestamps, fee info) and providing example feed IDs. This differentiates it from the sibling ds_list_feeds tool, which lists feeds.

Usage Guidelines 3/5

The description implies usage requires a known feed ID and offers common feed IDs as examples, but it does not explicitly state when to use this tool versus alternatives like ds_list_feeds, nor does it provide prerequisites or exclusions.

ds_list_feeds (grade A)

List available Chainlink Data Streams feeds. Returns feed IDs, pairs, and asset classes (crypto, forex, equities, commodities). Data Streams provide sub-second latency market data — distinct from the slower on-chain Data Feeds.

Parameters (JSON Schema)
- asset_class (optional): Filter by asset class: 'crypto', 'forex', 'equities', 'commodities'
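The optional asset_class filter can be mirrored client-side. A minimal sketch, assuming a response shaped as a list of entries carrying the fields the description names; the entries and field names below are invented for illustration, and only the four asset-class values come from the schema.

```python
# Invented sample of what a feeds listing might look like.
feeds = [
    {"feed_id": "0x0003aa...", "pair": "ETH/USD", "asset_class": "crypto"},
    {"feed_id": "0x0004bb...", "pair": "EUR/USD", "asset_class": "forex"},
]

def filter_feeds(entries, asset_class=None):
    """Mirror the tool's optional asset_class filter client-side."""
    if asset_class is None:
        return entries
    return [e for e in entries if e["asset_class"] == asset_class]
```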
Behavior 3/5

No annotations are provided, so the description must convey behavioral traits. It states the tool returns feed IDs, pairs, and asset classes, and is a list operation. However, it does not disclose any side effects, authentication needs, rate limits, or potential size limits of the response. For a read-only list tool, this is adequate but not comprehensive.

Conciseness 5/5

The description is two sentences, front-loaded with the purpose and return values, followed by a concise distinction from the sibling tool. Every word earns its place with no redundancy.

Completeness 4/5

Given the tool has only one optional parameter and no output schema, the description adequately explains the functionality, return content, and differentiation from a similar tool. It could mention potential response size or pagination, but for its simplicity, it is sufficiently complete.

Parameters 3/5

The input schema has 100% description coverage for the single optional parameter 'asset_class'. The tool description reinforces the meaning by listing the allowed asset classes (crypto, forex, equities, commodities), but adds no new information beyond what the schema already provides. Baseline 3 is appropriate.

Purpose 5/5

The description explicitly states the tool lists available Chainlink Data Streams feeds, returns feed IDs, pairs, and asset classes, and distinguishes itself from the slower on-chain Data Feeds. This satisfies a specific verb+resource and differentiates it from the sibling 'chainlink_list_feeds'.

Usage Guidelines 4/5

The description clearly contrasts Data Streams (sub-second latency) with Data Feeds (slower on-chain), providing guidance on when to use this tool versus the sibling. It also mentions the optional asset class filter. However, it does not explicitly state when not to use it.

por_get_reserve (grade A)

Read a Chainlink Proof of Reserve feed to verify asset reserves onchain. Uses the same AggregatorV3Interface as price feeds but returns reserve amounts instead of prices. Well-known PoR feeds on Ethereum: WBTC = 0xa81FE04086865e63E12dD3776978E49DEEa2ea4e, USDC = 0x9a177Bb065A0636C7972C6D27Abcd4B1e5EDb65c, TUSD = 0x478f4c42b877c697C4b19E396865D5437Ef4E08B.

Parameters (JSON Schema)
- chain (optional): Chain: 'ethereum'. Defaults to 'ethereum'
- feed_address (required): Proof of Reserve feed contract address (hex). Well-known: WBTC=0xa81FE04086865e63E12dD3776978E49DEEa2ea4e, USDC=0x9a177Bb065A0636C7972C6D27Abcd4B1e5EDb65c
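Like price feeds, a PoR feed's AggregatorV3Interface answer is a fixed-point integer scaled by the feed's decimals(); converting it to a human-readable reserve amount is a division by 10**decimals. A minimal sketch of that conversion, with an invented raw value rather than a live reading:

```python
from decimal import Decimal

def scale_reserve(raw_answer: int, decimals: int) -> Decimal:
    """Convert a raw AggregatorV3Interface answer into a reserve amount.

    PoR feeds report reserves as fixed-point integers; dividing by
    10**decimals yields the human-readable quantity.
    """
    return Decimal(raw_answer) / Decimal(10) ** decimals

# Invented example: a WBTC-style feed with 8 decimals reporting a raw
# answer of 152_345_000_000, i.e. 1523.45 BTC held in reserve.
reserve = scale_reserve(152_345_000_000, 8)
```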
Behavior 3/5

No annotations are provided, so the description carries the full burden. It reveals the tool is read-only and returns reserve amounts, but does not mention error handling, permissions, or rate limits. For a simple read operation, this is adequate but not comprehensive.

Conciseness 5/5

The description is concise with three sentences: purpose, differentiation from price feeds, and specific addresses. No unnecessary words, and the key information is front-loaded. Every sentence earns its place.

Completeness 4/5

Given the tool's simplicity (two parameters, no output schema), the description covers purpose, usage, and examples. It could mention the return format or typical units, but the sibling tools (including chainlink_get_price) provide context, making it sufficiently complete for selection and invocation.

Parameters 3/5

Input schema has 100% coverage with clear descriptions for both parameters. The description adds context about well-known feed addresses and the AggregatorV3Interface, but does not substantially enhance parameter understanding beyond the schema. Baseline 3 is appropriate.

Purpose 5/5

The description clearly states the tool reads a Chainlink Proof of Reserve feed to verify asset reserves. It distinguishes from sibling price feed tools by explicitly noting it returns reserve amounts instead of prices, and provides specific well-known contract addresses.

Usage Guidelines 4/5

The description explains the tool uses the same interface as price feeds but returns reserves, guiding when to use this over price feed alternatives. It provides well-known feed addresses for common cases, but could be more explicit about when not to use it or other alternatives.

por_list_feeds (grade A)

List well-known Chainlink Proof of Reserve feeds. Returns feed addresses, asset names, and descriptions for verifying reserve backing of wrapped/synthetic assets.

Parameters (JSON Schema)

No parameters

Behavior 3/5

The description outlines what the tool returns (feed addresses, names, descriptions) but lacks information on side effects, authentication requirements, rate limits, or read-only nature. With no annotations, the agent has limited behavioral context.

Conciseness 5/5

The description is a single concise sentence that front-loads the main action ('List well-known Chainlink Proof of Reserve feeds'). Every word serves a purpose with no redundancy.

Completeness 4/5

For a simple list tool with no parameters and no output schema, the description covers purpose and return values adequately. However, it could include details like whether the list is exhaustive or if special permissions are needed.

Parameters 4/5

There are no parameters, and schema description coverage is 100% (trivially). The description adds meaning by specifying the return fields and the purpose (reserve backing verification), providing value beyond the empty schema.

Purpose 5/5

The description clearly states it lists Chainlink Proof of Reserve feeds with specific return fields (addresses, asset names, descriptions). It distinguishes from siblings like chainlink_list_feeds by specifying 'Proof of Reserve', indicating a different domain.

Usage Guidelines 3/5

The description implies usage for verifying reserve backing but does not explicitly state when to use this tool over siblings (e.g., chainlink_list_feeds or ds_list_feeds). No when-not-to-use or alternative guidance is provided.

vrf_get_subscription (grade B)

Get VRF v2.5 subscription details from the VRFCoordinatorV2_5 contract. Returns balance, owner, authorized consumers, and pending requests. Supports Ethereum, Arbitrum, and Base.

Parameters (JSON Schema)
- chain (optional): Chain: 'ethereum', 'arbitrum', 'base'. Defaults to 'ethereum'
- subscription_id (required): VRF subscription ID (uint256 as decimal string)
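The subscription_id is a uint256 passed as a decimal string, which sidesteps JSON number precision loss for IDs larger than 2^53. A hypothetical client-side check before calling the tool (normalize_subscription_id is illustrative, not part of the server's API):

```python
UINT256_MAX = 2**256 - 1

def normalize_subscription_id(sub_id: str) -> str:
    """Validate a VRF v2.5 subscription ID given as a decimal string.

    Hypothetical helper: it simply checks that the string parses as a
    base-10 integer within uint256 range, then returns it normalized.
    """
    value = int(sub_id, 10)  # raises ValueError on non-decimal input
    if not 0 <= value <= UINT256_MAX:
        raise ValueError("subscription_id out of uint256 range")
    return str(value)
```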
Behavior 2/5

No annotations are provided, and the description does not disclose behavioral traits beyond the read operation. It lacks details on side effects, authentication needs, rate limits, or prerequisites.

Conciseness 5/5

Two sentences with no wasted words. Front-loaded with the action and resource, immediately stating purpose and return fields.

Completeness 4/5

A simple get operation for which the description covers purpose, return fields, and supported chains. It does not specify the contract address or network requirements, but for a read tool with a clear name, the completeness is adequate.

Parameters 3/5

Schema coverage is 100%, so the schema already documents both parameters. The description adds the supported chains as a list, but this is implicit from the schema's description. The description does not add new meaning beyond what the schema provides.

Purpose 5/5

The description clearly states it gets VRF v2.5 subscription details and lists specific return fields. It differentiates from siblings like chainlink_get_subscription and vrf_request_random through the version and purpose.

Usage Guidelines 2/5

No guidance on when to use this tool versus alternatives such as chainlink_get_subscription (for other VRF versions) or vrf_request_random (for requesting randomness). The description does not say when not to use it or name explicit alternatives.

vrf_request_random (grade A)

Build transaction calldata for a VRF v2.5 random words request. Returns the hex-encoded calldata for VRFCoordinatorV2_5.requestRandomWords(). The caller must sign and submit the transaction from a consumer contract. VRF v2.5 supports payment in LINK or native token.

Parameters (JSON Schema)
- chain (optional): Chain: 'ethereum', 'arbitrum', 'base'. Defaults to 'ethereum'
- key_hash (required): VRF key hash for the gas lane (hex, 32 bytes)
- num_words (optional): Number of random words to request (default: 1, max: 500)
- native_payment (optional): Pay in native token instead of LINK (default: false)
- subscription_id (required): VRF subscription ID (uint256 as decimal string)
- callback_gas_limit (optional): Callback gas limit (default: 100000)
- request_confirmations (optional): Number of block confirmations before fulfillment (default: 3)
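The defaults and bounds in the table can be applied client-side before invoking the tool. A hypothetical argument builder (build_vrf_args and its validation are illustrative, not the server's API; the defaults and the num_words ceiling of 500 come from the parameter table):

```python
def build_vrf_args(subscription_id: str, key_hash: str, *,
                   chain: str = "ethereum", num_words: int = 1,
                   callback_gas_limit: int = 100_000,
                   request_confirmations: int = 3,
                   native_payment: bool = False) -> dict:
    """Assemble vrf_request_random arguments with the documented defaults."""
    if chain not in ("ethereum", "arbitrum", "base"):
        raise ValueError("unsupported chain")
    if not 1 <= num_words <= 500:
        raise ValueError("num_words must be between 1 and 500")
    return {
        "chain": chain,
        "subscription_id": subscription_id,
        "key_hash": key_hash,
        "num_words": num_words,
        "callback_gas_limit": callback_gas_limit,
        "request_confirmations": request_confirmations,
        "native_payment": native_payment,
    }
```

The returned calldata still has to be signed and submitted from a consumer contract, as the description notes; this builder only validates inputs.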
Behavior 4/5

It discloses that the tool only builds calldata rather than sending the transaction, and outlines the payment options. Without annotations, it covers the key behavioral traits but omits error conditions and success indicators.

Conciseness 4/5

Three concise sentences with a front-loaded purpose, followed by the return format and payment details. Efficient, but it could benefit from clearer sectioning.

Completeness 4/5

It covers the return value and caller responsibility adequately. With no output schema, it appropriately describes the hex-encoded calldata itself. It could reference external docs for completeness.

Parameters 3/5

Schema coverage is 100%, so parameters are well-described in the schema. The description adds minor context (e.g., VRF v2.5, payment options) but does not substantially deepen understanding beyond the schema.

Purpose 5/5

It clearly states that the tool builds transaction calldata for a VRF v2.5 random words request, specifying the exact contract function and output format. It is distinct from sibling tools, which handle CCIP, Chainlink upkeep, and so on.

Usage Guidelines 4/5

Provides essential usage context: the caller must sign and submit the transaction from a consumer contract, and payment can be in LINK or the native token. It lacks an explicit comparison to alternative approaches or guidance on when not to use it.
