
Tenzro Solana MCP

Server Details

Solana MCP: Jupiter swaps, SPL transfers, Metaplex NFTs, Bonfida SNS, slot/TPS, staking.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: tenzro/tenzro-network
GitHub Stars: 0

Tool Descriptions: A

Average 3.8/5 across 14 of 14 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose, such as fetching balance, NFTs, price, TPS, transaction details, yield, etc. Even the two NFT-related tools differ: one retrieves single NFT metadata, the other retrieves all NFTs by owner.

Naming Consistency: 5/5

All tools follow the consistent pattern 'solana_<verb>_<noun>', using snake_case. Verbs are mainly 'get', with a few action verbs like 'resolve', 'stake', 'swap', 'transfer', maintaining uniformity.

Tool Count: 5/5

With 14 tools, the server covers essential Solana operations—balance, NFTs, tokens, transactions, TPS, slot, price, yield, and instruction building for stake, swap, and transfer—without being bloated.

Completeness: 4/5

The tool set provides comprehensive read operations and instruction generation for common actions. It lacks actual transaction execution and some advanced operations like token creation, but this is a reasonable scope for an informational/instruction MCP server.

Available Tools

14 tools
solana_get_balance (A)

Get the native SOL balance of a Solana wallet address

Parameters (JSON Schema):
- address (required): Solana wallet address (base58-encoded public key)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the basic read operation but omits behavioral details such as output format (lamports vs SOL), rate limits, or network specificity. This is minimally adequate given the tool's simplicity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words. It is front-loaded with the key action and resource, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with full schema coverage, the description is mostly complete. However, it lacks details on the return value format (e.g., lamports or SOL), which could lead to ambiguity. Given the absence of an output schema, this is a notable gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter 'address' is already well-described in the schema. The description adds no additional meaning beyond what the schema provides, meeting the baseline but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'Get', the resource 'native SOL balance', and the specific entity 'Solana wallet address'. It distinguishes from sibling tools like solana_get_nft or solana_get_token_accounts, making the tool's purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description effectively implies when to use this tool (when needing the native SOL balance of a wallet), but lacks explicit guidance on when not to use it or how it compares to alternatives. However, given the distinct sibling tools, the usage context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
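The assessment above flags that the balance tool never says whether it returns lamports or SOL. For context, the underlying Solana JSON-RPC `getBalance` method reports lamports, so a client typically converts before display. A minimal sketch (the payload shape follows the public Solana RPC spec; the helper names are ours):

```python
import json

LAMPORTS_PER_SOL = 1_000_000_000  # 1 SOL = 10^9 lamports

def build_get_balance_payload(address: str) -> str:
    # JSON-RPC 2.0 payload for the standard Solana `getBalance` method
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getBalance",
        "params": [address],
    })

def lamports_to_sol(lamports: int) -> float:
    # RPC nodes report native balances in lamports; convert for display
    return lamports / LAMPORTS_PER_SOL

payload = build_get_balance_payload("So11111111111111111111111111111111111111112")
print(lamports_to_sol(2_500_000_000))  # → 2.5
```

A description that stated "returns the balance in SOL (converted from lamports)" would close the gap noted under Completeness.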

solana_get_nft (A)

Get NFT metadata using the Metaplex DAS (Digital Asset Standard) API. Requires a Helius API key set via HELIUS_API_KEY env var, or returns the call pattern for manual use.

Parameters (JSON Schema):
- mint_address (required): NFT mint address

Behavior: 4/5

Despite no annotations, the description discloses authentication requirements and conditional behavior (returns a call pattern without a key). It hints at read-only nature by not mentioning mutations, though it could explicitly state idempotency.

Conciseness: 5/5

Two concise sentences, front-loaded with purpose, no redundant information. Every word adds value.

Completeness: 4/5

For a simple tool with one parameter and no output schema, the description covers authentication, conditional behavior, and API source. It is missing return format details but is adequate for agent understanding.

Parameters: 3/5

The input schema already fully describes the parameter (mint_address, described as 'NFT mint address'). The description adds no extra meaning beyond what the schema provides.

Purpose: 5/5

Clearly states the tool gets NFT metadata using the Metaplex DAS API. The verb 'Get' and resource 'NFT metadata' are specific, and it distinguishes the tool from siblings like solana_get_nfts_by_owner, which fetches multiple NFTs.

Usage Guidelines: 4/5

Provides clear prerequisites (Helius API key via env var) and fallback behavior (returns the call pattern if the key is missing). Does not explicitly mention when to use this over alternative NFT tools, but the context is implied by the tool name.
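The DAS API is JSON-RPC-shaped, so the "call pattern" the tool hands back when no key is set presumably looks like a `getAsset` request. A hedged sketch (the Helius endpoint URL and key handling are assumptions mirroring the documented env-var behavior):

```python
import json
import os
from typing import Optional

def build_get_asset_payload(mint_address: str) -> str:
    # DAS `getAsset` is a JSON-RPC method keyed by the asset (mint) id
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getAsset",
        "params": {"id": mint_address},
    })

def das_endpoint() -> Optional[str]:
    # Mirrors the tool's documented fallback: without HELIUS_API_KEY,
    # no direct call is possible and the call pattern is returned instead
    key = os.environ.get("HELIUS_API_KEY")
    if key is None:
        return None
    return f"https://mainnet.helius-rpc.com/?api-key={key}"
```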

solana_get_nfts_by_owner (A)

Get all NFTs (compressed and standard) owned by a Solana address using the Metaplex DAS API. Requires HELIUS_API_KEY env var.

Parameters (JSON Schema):
- page (optional): Page number for pagination (default 1)
- limit (optional): Maximum number of results (default 20, max 1000)
- owner_address (required): Owner wallet address (base58-encoded)

Behavior: 3/5

No annotations are provided, so the description bears the full burden. It mentions the API dependency and env var requirement, but does not disclose the read-only nature, rate limits, or pagination behavior. Basic but not deficient.

Conciseness: 5/5

Two sentences with no wasted words. Purpose and key requirement are front-loaded. Efficient and scannable.

Completeness: 3/5

No output schema exists, yet the description does not mention the return format. Pagination parameters are present but not explained in the description. Adequate, but it leaves gaps for a paginated list tool.

Parameters: 3/5

The schema has 100% description coverage for all three parameters (page, limit, owner_address). The tool description adds no extra detail beyond the schema, so the baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool retrieves all NFTs (compressed and standard) owned by a Solana address via the Metaplex DAS API. It distinguishes the tool from siblings like 'solana_get_nft', which fetches a single NFT.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool vs alternatives. The requirement for HELIUS_API_KEY is mentioned, but no conditions or exclusions are given.
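The schema documents pagination bounds (default 20, max 1000) that the description never explains. A sketch of how a client might build the DAS `getAssetsByOwner` request while clamping to those documented bounds (the clamping policy is our assumption; the server may instead reject out-of-range values):

```python
import json

def build_get_assets_by_owner_payload(owner_address: str,
                                      page: int = 1,
                                      limit: int = 20) -> str:
    # Clamp to the documented bounds: page >= 1, 1 <= limit <= 1000
    page = max(1, page)
    limit = max(1, min(limit, 1000))
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getAssetsByOwner",
        "params": {"ownerAddress": owner_address, "page": page, "limit": limit},
    })

params = json.loads(
    build_get_assets_by_owner_payload("ownerAddr", page=0, limit=5000)
)["params"]
```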

solana_get_price (A)

Get the current USD price of a Solana token from Jupiter Price API v3. Requires the token mint address (e.g. 'So11111111111111111111111111111111111111112' for SOL).

Parameters (JSON Schema):
- token_id (required): Token mint address (e.g. 'So11111111111111111111111111111111111111112' for SOL, 'EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v' for USDC)

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It mentions the external API (Jupiter) and the tool is read-only, but it lacks details on rate limits, errors, or response behavior.

Conciseness: 5/5

A single, well-structured sentence that front-loads the purpose and provides key details efficiently.

Completeness: 4/5

For a simple one-parameter tool with no output schema, the description is adequately complete, covering purpose, input, and an example.

Parameters: 4/5

Schema coverage is 100%, and the description adds value by providing example mint addresses and clarifying the parameter's nature (token mint address), though it does not extend significantly beyond the schema.

Purpose: 5/5

The description explicitly states the tool gets the current USD price of a Solana token using Jupiter Price API v3, differentiating it from sibling tools like solana_get_balance or solana_get_token_info.

Usage Guidelines: 4/5

The description clearly indicates when to use the tool (to fetch a USD price) and what input is required (a token mint address). It does not explicitly state when not to use it, but the context makes this clear.
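Jupiter's price endpoints are queried by mint address via an `ids` query parameter, which is why the tool takes a mint rather than a ticker symbol. A hedged sketch of building such a request URL (the exact v3 base URL is an assumption; check Jupiter's docs for the current host and path):

```python
from urllib.parse import urlencode

JUP_PRICE_BASE = "https://lite-api.jup.ag/price/v3"  # assumed v3 base URL

def build_price_url(*mints: str) -> str:
    # Multiple mints can be priced in one call via a comma-separated `ids` list
    return f"{JUP_PRICE_BASE}?{urlencode({'ids': ','.join(mints)})}"

url = build_price_url("So11111111111111111111111111111111111111112")
```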

solana_get_slot (A)

Get the current slot height of the Solana network

Parameters (JSON Schema):
- No parameters

Behavior: 2/5

With no annotations, the description solely bears the burden of behavioral disclosure. It merely states the function without mentioning implications, return format, or consistency guarantees for a read operation.

Conciseness: 5/5

A single, front-loaded sentence with no extraneous words. Every part earns its place.

Completeness: 4/5

For a simple tool with no parameters and no output schema, the description adequately specifies the return (current slot height). It lacks context such as data type or freshness, but given the simplicity, it is sufficient.

Parameters: 4/5

The tool has zero parameters, so the input schema fully covers it. The baseline of 4 applies: the description adds no parameter details, but none are needed.

Purpose: 5/5

The description clearly states the tool's action ('Get') and the resource ('current slot height of the Solana network'), making it distinct from sibling tools like solana_get_tps.

Usage Guidelines: 2/5

No guidance on when to use this tool vs alternatives (e.g., when to choose slot height over TPS). The description lacks any contextual or comparative advice.
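The "consistency guarantees" gap called out under Behavior matters because the standard Solana `getSlot` RPC method accepts a commitment level that changes which slot you see. A sketch of the request a client might send (the default choice of "finalized" is our assumption):

```python
import json

def build_get_slot_payload(commitment: str = "finalized") -> str:
    # `getSlot` takes an optional commitment level ("processed", "confirmed",
    # or "finalized"); finalized is the most conservative reading
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getSlot",
        "params": [{"commitment": commitment}],
    })
```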

solana_get_token_accounts (A)

Get all SPL token accounts owned by a Solana wallet address, including balances and mint addresses

Parameters (JSON Schema):
- owner_address (required): Owner wallet address (base58-encoded public key)

Behavior: 3/5

No annotations are provided, so the description must cover behavioral traits. It states the tool returns balances and mint addresses, implying a read-only operation, but it does not mention potential side effects, permissions, or error conditions. Given the simplicity, it is adequate but not exceptional.

Conciseness: 5/5

A single sentence, front-loaded with the core purpose. No wasted words; every part adds value.

Completeness: 5/5

Given the low complexity (one parameter, no output schema), the description is complete. It covers what the tool does and the data returned. No obvious gaps.

Parameters: 3/5

The only parameter (owner_address) is fully described in the schema (100% coverage). The description adds no additional meaning beyond what the schema provides, so the baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the action (get token accounts), the resource (owned by a wallet address), and additional details (balances and mint addresses). It distinguishes the tool from siblings like solana_get_balance, which gets the SOL balance.

Usage Guidelines: 4/5

The description implies when to use the tool: when you need all token accounts for a wallet. It doesn't explicitly exclude alternatives, but the sibling tools are distinct enough that confusion is unlikely.
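Listing SPL token accounts for a wallet is typically done with the standard `getTokenAccountsByOwner` RPC method, filtered to the SPL Token program and parsed so balances and mints come back as readable JSON. A sketch of that request (payload shape per the public Solana RPC spec; this is an illustration, not necessarily how this server implements it):

```python
import json

# Well-known SPL Token program id on Solana mainnet
TOKEN_PROGRAM_ID = "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA"

def build_token_accounts_payload(owner_address: str) -> str:
    # Filter by the SPL Token program and request parsed account data,
    # which exposes each account's mint and token balance directly
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getTokenAccountsByOwner",
        "params": [
            owner_address,
            {"programId": TOKEN_PROGRAM_ID},
            {"encoding": "jsonParsed"},
        ],
    })
```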

solana_get_token_info (B)

Get metadata for an SPL token by its mint address, including name, symbol, decimals, and logo from the Jupiter token list

Parameters (JSON Schema):
- mint_address (required): SPL token mint address

Behavior: 2/5

No annotations are provided. The description identifies the data source (Jupiter token list) but discloses no other behavioral traits (e.g., read-only nature, authorization needs, rate limits, or potential side effects). For a tool with no annotations, that is insufficient disclosure.

Conciseness: 5/5

A single sentence of 20 words, front-loaded with the key verb and resource. No redundant phrases; every word adds value. Efficient and well-structured.

Completeness: 4/5

For a simple one-parameter tool with no output schema, the description covers the purpose, the key input, and explicitly lists the return fields (name, symbol, decimals, logo). Adequately complete for this low-complexity tool.

Parameters: 3/5

Schema description coverage is 100%, with mint_address described as 'SPL token mint address'. The description adds no further semantics beyond echoing the schema, so the baseline score of 3 is appropriate, as the schema already defines the parameter sufficiently.

Purpose: 5/5

The description clearly states the tool retrieves metadata for an SPL token by mint address, listing specific fields (name, symbol, decimals, logo). It uniquely identifies its purpose among siblings like solana_get_balance and solana_get_nft.

Usage Guidelines: 2/5

No guidance on when to use this tool vs alternatives, and no mention of prerequisites, exclusions, or contextual scenarios. The description assumes the agent knows to use it for token metadata but doesn't differentiate it from the other Solana tools.
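The `decimals` field this tool returns is what makes raw SPL amounts interpretable: on-chain token balances are integers that must be scaled by 10^decimals. A small illustration of that conversion (helper name is ours):

```python
def raw_to_ui_amount(raw_amount: int, decimals: int) -> float:
    # SPL token amounts are stored as integers; the token's `decimals`
    # metadata determines the scaling factor for human-readable display
    return raw_amount / (10 ** decimals)

# USDC uses 6 decimals, so 1_234_567 raw units is 1.234567 USDC
print(raw_to_ui_amount(1_234_567, 6))  # → 1.234567
```

This is why agents combining solana_get_token_accounts with this tool need both calls: the former returns raw balances, the latter the decimals to scale them.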

solana_get_tps (A)

Get the current transactions per second (TPS) on the Solana network by sampling recent performance data

Parameters (JSON Schema):
- No parameters

Behavior: 4/5

With no annotations, the description fully bears the burden of behavioral disclosure. It states that the tool 'samples recent performance data' to get TPS, indicating a read-only, non-destructive operation. This adequately informs the agent of the tool's behavior, though details like caching or rate limits are not mentioned.

Conciseness: 5/5

A single, focused sentence with no extraneous words. Every part contributes to understanding the tool's purpose and behavior. Optimally concise.

Completeness: 4/5

Given the tool's simplicity (no parameters, no output schema), the description is nearly complete. It explains what the tool does and how ('by sampling recent performance data'). A minor improvement would be to specify whether the value is an instantaneous measure or an average, but it is sufficient for an agent to understand the tool's function.

Parameters: 4/5

The input schema has zero parameters, so schema coverage is 100%. Per the guidelines, a baseline score of 4 is appropriate since no parameter documentation is needed.

Purpose: 5/5

The description clearly states the verb 'Get', the resource 'current transactions per second (TPS)', and the context 'on the Solana network by sampling recent performance data'. This accurately conveys the tool's function and distinguishes it from siblings like solana_get_balance or solana_get_slot.

Usage Guidelines: 3/5

The description is self-contained and clearly indicates the tool is for retrieving TPS, which implies when to use it. However, it does not explicitly mention when not to use it or point to alternative tools for related tasks. Since the sibling tools are distinct in purpose, the lack of explicit guidelines is mitigated but still leaves room for improvement.
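"Sampling recent performance data" most likely refers to the standard `getRecentPerformanceSamples` RPC method, whose entries report a transaction count per sample window; TPS is then the total count over the total window. A sketch of that averaging (the field names match the public RPC response; treating TPS as an average over samples, not an instantaneous reading, is the assumption):

```python
def tps_from_samples(samples: list) -> float:
    # Each sample carries `numTransactions` over `samplePeriodSecs`;
    # averaging across samples smooths out per-window spikes
    total_txs = sum(s["numTransactions"] for s in samples)
    total_secs = sum(s["samplePeriodSecs"] for s in samples)
    return total_txs / total_secs if total_secs else 0.0

samples = [
    {"numTransactions": 120_000, "samplePeriodSecs": 60},
    {"numTransactions": 180_000, "samplePeriodSecs": 60},
]
print(tps_from_samples(samples))  # → 2500.0
```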

solana_get_transaction (A)

Get details of a Solana transaction by its signature, including status, block time, and instructions

Parameters (JSON Schema):
- signature (required): Transaction signature (base58-encoded)

Behavior: 3/5

Without annotations, the description carries the full burden. It lists some return fields but does not mention the read-only nature, error conditions, or edge cases such as pending vs confirmed transactions, which a user would need to know.

Conciseness: 5/5

A single sentence that immediately states the primary action and key input. No wasted words; front-loaded with the verb and resource.

Completeness: 4/5

With only one simple parameter and no output schema, the description sufficiently covers what the tool does and the expected return content. Minor gap: it does not specify whether the transaction must be finalized or whether pending ones are included.

Parameters: 3/5

The sole parameter 'signature' is fully described in the schema (100% coverage). The description adds no extra semantic value beyond stating 'by its signature', so the baseline of 3 applies.

Purpose: 5/5

Clearly states that the tool gets transaction details by signature, listing specific fields (status, block time, instructions). This distinguishes it from sibling tools like solana_get_balance or solana_get_slot, which retrieve different data.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives. No conditions, prerequisites, or exclusions are provided, leaving the agent to infer from the description alone.
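Transaction lookups on Solana usually go through the `getTransaction` RPC method, which, notably, returns null for versioned (v0) transactions unless `maxSupportedTransactionVersion` is set — exactly the kind of edge case the Behavior score dings the description for omitting. A sketch of the request (payload shape per the public RPC spec; this illustrates the general pattern, not this server's internals):

```python
import json

def build_get_transaction_payload(signature: str) -> str:
    # Without maxSupportedTransactionVersion, v0 (versioned) transactions
    # are not returned by many RPC nodes; 0 opts in to both legacy and v0
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getTransaction",
        "params": [
            signature,
            {"encoding": "json", "maxSupportedTransactionVersion": 0},
        ],
    })
```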

solana_get_yield (A)

Get current yield/APY information for Solana DeFi staking protocols including Marinade, Jito, BlazeStake, and native staking

Parameters (JSON Schema):
- protocol (optional): Optional protocol filter: 'marinade', 'jito', 'blaze', 'lido', or omit for all

Behavior: 3/5

No annotations exist, so the description carries the full burden. It indicates a read operation ('Get'), which is non-destructive, but fails to disclose additional behavioral traits such as authentication requirements, rate limits, or error handling. The simplicity of the tool partly mitigates this gap.

Conciseness: 5/5

A single sentence with no redundancy. The essential information is front-loaded: the action (Get), resource (yield/APY), and context (Solana staking protocols). Every word contributes to clarity.

Completeness: 4/5

For a tool with one optional parameter and no output schema, the description adequately covers the purpose and protocol filter. It lacks details on return format and edge cases, but given the low complexity, this is acceptable.

Parameters: 3/5

Schema coverage is 100%, and the description provides examples (Marinade, Jito, BlazeStake, native staking) that largely align with the schema's documented values. This adds marginal value beyond the schema, meeting the baseline expectation.

Purpose: 5/5

The description clearly states that the tool retrieves yield/APY for Solana staking protocols, listing specific examples. This distinguishes it from sibling tools like solana_get_balance or solana_get_price, which serve different purposes. The verb 'Get' and resource 'yield/APY' are specific and unambiguous.

Usage Guidelines: 3/5

The description implies usage context by naming specific staking protocols, but does not explicitly state when to use this tool over its siblings. No alternatives or exclusions are mentioned, though the sibling list provides implicit differentiation.

solana_resolve_domain (A)

Resolve a .sol (Solana Name Service) domain to its owner wallet address using the Bonfida SNS proxy

Parameters (JSON Schema)
- domain (required): SNS domain to resolve (e.g. 'toly.sol')
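For concreteness, a call to this tool might look like the following sketch, assuming the standard MCP JSON-RPC tools/call shape. The request id is arbitrary, and 'toly.sol' is the schema's own example value:

```python
import json

# Hypothetical MCP "tools/call" request for solana_resolve_domain.
# The JSON-RPC envelope follows the standard MCP shape; the "domain"
# argument uses the example value from the schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "solana_resolve_domain",
        "arguments": {"domain": "toly.sol"},  # required parameter
    },
}
print(json.dumps(request, indent=2))
```

The response shape is not documented by the tool, so a client should treat the returned owner address as opaque text until verified.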
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of disclosing behavioral traits. It states that the operation is a lookup, but does not mention potential errors, network dependencies, or success/failure conditions, nor does it confirm idempotency or the absence of side effects. That is insufficient for a tool without annotations.

Conciseness: 5/5

The description is a single, well-constructed sentence that conveys all essential information without extraneous words. Every part serves a purpose.

Completeness: 4/5

For a simple tool with one parameter and no output schema, the description adequately explains the input (domain) and output (owner wallet address). It lacks details on error handling or address format, but is largely complete for the tool's complexity.

Parameters: 3/5

Schema coverage is 100%, so the description does not need to add much parameter context. The description reinforces the domain format ('.sol') and the resolving service, but the schema already provides a clear example, so the baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states the action ('Resolve'), the resource ('.sol domain'), and the result ('owner wallet address'). It also specifies the service ('Bonfida SNS proxy'), making the tool's purpose unambiguous and distinguishable from sibling tools like solana_get_balance or solana_transfer.

Usage Guidelines: 3/5

The description implies usage (when you need to resolve a .sol domain) but does not explicitly state when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. It is adequate but lacks explicit guidance.

solana_stake (A)

Get instructions for staking SOL with a validator. Returns the step-by-step process (does not execute the transaction).

Parameters (JSON Schema)
- amount_sol (required): Amount of SOL to stake (e.g. 1.5 for 1.5 SOL)
- validator_address (required): Validator vote account address to delegate to
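A minimal sketch of preparing arguments for this tool, assuming a Python client. The validator address below is an illustrative placeholder (not a real vote account), and the sanity checks are ours, not part of the server:

```python
# Hypothetical argument set for solana_stake. The validator address is
# an illustrative placeholder, not a real vote account.
args = {
    "amount_sol": 1.5,  # amount to stake, in SOL (not lamports)
    "validator_address": "ExampleVoteAccount11111111111111111111111111",
}

# Minimal client-side sanity checks before handing the call to the tool
assert args["amount_sol"] > 0, "stake amount must be positive"
assert args["validator_address"], "validator vote account is required"
print(args)
```

Since the tool only returns the step-by-step process, a client would still need a wallet and a signing flow to act on the instructions it receives.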
Behavior: 3/5

No annotations are provided; the description discloses that the tool returns instructions without executing them, which is the key behavioral trait. However, it lacks details on the return format and any prerequisites.

Conciseness: 5/5

Two concise sentences with all necessary information, no wasted words.

Completeness: 4/5

Given the tool's simplicity (two parameters, no output schema), the description adequately explains its purpose and non-execution nature. It could mention the return format, but is still sufficient.

Parameters: 3/5

Input schema coverage is 100%, so the schema already explains both parameters. The description adds no extra meaning beyond the schema.

Purpose: 5/5

The description clearly states the action ('Get instructions for staking SOL') and the resource ('with a validator'), and the explicit non-execution note distinguishes it from sibling tools such as solana_transfer.

Usage Guidelines: 4/5

The description notes that the tool does not execute the transaction, implying it is for planning rather than execution, but it does not explicitly state when to use it or what the alternatives are.

solana_swap (A)

Get a swap quote from Jupiter aggregator for trading between two Solana tokens. Returns route, price impact, and estimated output amount.

Parameters (JSON Schema)
- amount (required): Amount in smallest unit (lamports for SOL, base units for SPL tokens)
- input_mint (required): Input token mint address (e.g. 'So11111111111111111111111111111111111111112' for SOL)
- output_mint (required): Output token mint address (e.g. 'EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v' for USDC)
- slippage_bps (optional): Slippage tolerance in basis points (default 50 = 0.5%)
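Because `amount` is in base units, a client needs to convert a human-readable SOL figure before requesting a quote. A sketch, assuming a Python client; the helper name is ours, and the mint addresses are the schema's own examples:

```python
LAMPORTS_PER_SOL = 1_000_000_000  # SOL amounts must be given in lamports

def sol_to_lamports(sol: float) -> int:
    """Convert a SOL amount to the integer lamport value `amount` expects."""
    return int(round(sol * LAMPORTS_PER_SOL))

# Quote request arguments for swapping 0.25 SOL to USDC; the mint
# addresses are the examples given in the schema above.
args = {
    "input_mint": "So11111111111111111111111111111111111111112",   # SOL
    "output_mint": "EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v",  # USDC
    "amount": sol_to_lamports(0.25),  # 250_000_000 lamports
    "slippage_bps": 50,  # 50 bps = 0.5%, the documented default
}
```

Note that basis points divide by 10,000, so slippage_bps=50 corresponds to a 0.5% tolerance.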
Behavior: 3/5

The description discloses the return values (route, price impact, estimated output) but omits the critical behavioral trait that the tool does not execute the swap. With no annotations, the description does not fully compensate.

Conciseness: 5/5

Two focused sentences with no wasted words. Information is front-loaded and easy to parse.

Completeness: 4/5

The description covers the return values adequately given the absence of an output schema, but it omits that this is only a quote, not an execution, which is a key contextual gap for an agent.

Parameters: 3/5

Schema coverage is 100% with clear parameter descriptions. The description adds 'Jupiter aggregator' context but little else beyond the schema, so the baseline score applies.

Purpose: 5/5

Clearly states that it gets a swap quote from the Jupiter aggregator for trading between two Solana tokens, and specifies the outputs. Distinct from siblings like solana_transfer and solana_get_price.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives, and no prerequisites or conditions are mentioned. The description also lacks the context that this is a quote, not an execution.

solana_transfer (A)

Get instructions for transferring SOL between two addresses. Returns the transaction structure (does not execute).

Parameters (JSON Schema)
- to (required): Recipient wallet address (base58-encoded)
- from (required): Sender wallet address (base58-encoded)
- amount_lamports (required): Amount in lamports (1 SOL = 1_000_000_000 lamports)
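The lamport convention is easy to get wrong by a factor of 10^9, so a display helper is useful when confirming amounts. A sketch under the same conventions as the schema; the helper name and both addresses are illustrative placeholders:

```python
LAMPORTS_PER_SOL = 1_000_000_000  # per the schema: 1 SOL = 1_000_000_000 lamports

def lamports_to_sol(lamports: int) -> float:
    """Render a lamport amount as SOL for display or confirmation prompts."""
    return lamports / LAMPORTS_PER_SOL

# Hypothetical arguments for solana_transfer; both addresses are
# illustrative placeholders, not real base58 wallet addresses.
args = {
    "from": "SenderPlaceholder1111111111111111111111111111",
    "to": "RecipientPlaceholder111111111111111111111111111",
    "amount_lamports": 250_000_000,  # 0.25 SOL
}
print(f"transfer {lamports_to_sol(args['amount_lamports'])} SOL")
```

Since the tool only returns the transaction structure, the client remains responsible for signing and submitting it, and for checking the sender's balance beforehand.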
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It discloses the non-execution behavior, but fails to mention error conditions (e.g. insufficient balance), authentication needs, or side effects such as fee deduction.

Conciseness: 5/5

Two concise sentences, no fluff. Key information (purpose and execution flag) is front-loaded. Every sentence adds value.

Completeness: 4/5

For a tool that returns transaction instructions without executing, the description covers the core behavior and return type. However, it lacks details on possible errors (e.g. invalid addresses, insufficient balance) and whether it requires a connected wallet.

Parameters: 3/5

The schema covers all three parameters with descriptions, achieving 100% coverage. The description adds no extra meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states that the tool generates instructions for SOL transfers between two addresses, and explicitly notes that it returns the transaction structure without executing it. This distinguishes it from sibling tools like solana_swap and solana_stake.

Usage Guidelines: 3/5

No explicit when-to-use or when-not-to-use guidance, and no mention of prerequisites like wallet connection or balance checks. The 'does not execute' note implies usage in transaction-building flows, but alternatives are not discussed.
