Glama

Server Details

SolanaOracle - 12 Solana tools: SPL tokens, Jupiter, validators, rent, programs, NFTs.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ToolOracle/solanaoracle
GitHub Stars: 0
Server Listing
SolanaOracle

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 12 of 12 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool targets a distinct aspect of the Solana ecosystem (network stats, DeFi protocols, DEX volume, token risk, wallet analysis, etc.) with no significant overlap in purpose.

Naming Consistency: 5/5

All tool names follow a consistent pattern: 'sol_' prefix followed by a descriptive noun phrase using underscores, e.g., sol_bridge_flows, sol_defi_yields, sol_network_stats.

Tool Count: 5/5

With 12 tools, the server is well-scoped for a comprehensive Solana monitoring service, covering major areas without being excessive or too sparse.

Completeness: 4/5

The tool set covers core monitoring needs (network, DeFi, tokens, wallets), but lacks direct support for transaction details or specific program interactions, which are minor gaps.

Available Tools

12 tools
sol_bridge_flows (A)

Bridge deposit/withdrawal flow monitoring for Solana

Parameters (JSON Schema): none
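Since the parameter table is empty, a client invokes this tool with an empty arguments object. As a hedged sketch (the exact transport framing depends on the client; the `tools/call` method and `params` shape come from the MCP specification, while the helper name here is ours):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# sol_bridge_flows takes no parameters, so `arguments` is an empty object.
payload = build_tool_call("sol_bridge_flows", {})
print(payload)
```

The same shape applies to the server's other parameterless tools (sol_dex_volume, sol_network_stats, sol_overview, sol_protocol_list, sol_stablecoin_risk, sol_validator_info) with only the `name` changed.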

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It only states 'monitoring' without disclosing behavioral traits such as read-only nature, data freshness, time range, or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no extraneous words, efficiently conveying the tool's core function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no parameter details, the description is too brief. An agent would need more information about what data the tool returns (e.g., flow amounts, time periods, addresses).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are zero parameters, so the input schema is complete (100% coverage). The description adds meaning by explaining the tool's purpose beyond the empty schema. Baseline for no parameters is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb+resource: 'monitoring' of 'bridge deposit/withdrawal flow' on Solana. This clearly distinguishes it from sibling tools like sol_dex_volume (DEX volume) and sol_defi_yields (yields).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for monitoring bridge flows, but provides no explicit guidance on when to use it versus alternatives, nor any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_defi_yields (B)

Compare DeFi yields across all Solana protocols. Filter by TVL and category.

Parameters (JSON Schema):
- category (optional): Filter keyword (e.g., stablecoin, sol, jito)
- min_tvl_usd (optional): Minimum pool TVL (default: 100000)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It does not state whether the tool is read-only, destructive, or describe side effects, rate limits, or data freshness. The description is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that efficiently communicates the tool's purpose and key features. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema, the description should elaborate on return values or expected output format. It only says 'compare yields' without specifying what fields or structure the response contains, leaving a significant gap for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100%, so baseline is 3. The description mentions 'Filter by TVL and category' which restates the parameter names but adds no new semantic detail beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Compare DeFi yields') and the resource ('all Solana protocols'). It distinguishes this tool from siblings like sol_bridge_flows or sol_dex_volume by specifying the domain (DeFi yields).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for yield comparison across Solana protocols, but it does not explicitly state when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_dex_volume (A)

DEX volume and liquidity across 72 Solana exchanges. $1.6B+ daily volume.

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context (scale and scope) but lacks details like update frequency, data recency, or whether results are aggregate or per-exchange. No annotations exist to supplement.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with no redundant words, front-loading the purpose and key metric.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple no-parameter tool, the description covers the essential domain and scale. Could be improved by mentioning data format or update cadence, but overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, the schema provides no information. The description meaningfully adds context about the returned data (volume, liquidity, 72 exchanges, $1.6B+ daily).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides DEX volume and liquidity across 72 Solana exchanges, differentiating it from siblings like sol_bridge_flows or sol_network_stats.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives, nor any context about prerequisites or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_network_stats (A)

Solana network health: TPS, current slot, epoch progress, block height, supply breakdown

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the burden. It indicates read-only metrics but does not clarify data freshness, rate limits, or authorization needs. This is adequate for a simple lookup tool but could be more informative.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently communicates the tool's output. No unnecessary words, and the key metrics are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless tool with no output schema, the description provides a good overview of return values. However, it could mention whether data is real-time or cached, which would aid the agent in estimating staleness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are zero parameters, and schema coverage is 100%. The description lists the output fields, which adds context beyond the empty schema. Baseline 4 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly lists specific metrics (TPS, slot, epoch progress, block height, supply breakdown), clearly defining what the tool returns. It distinguishes from sibling tools which focus on specific domains like DeFi or DEX.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for general network health stats, but lacks explicit guidance on when to use or avoid this tool, and does not reference alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_overview (A)

Comprehensive Solana ecosystem overview: SOL price, TVL ($6.6B), supply, TPS, epoch, protocol count

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It lists the data returned (price, TVL, supply, TPS, epoch, protocol count), which provides some transparency. However, it fails to mention if the tool is read-only, data freshness, or any rate limits, leaving gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the key information: comprehensive ecosystem overview and specific metrics. Every word adds value, and there is no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters, no output schema), the description is mostly complete, listing the included metrics. However, it does not specify the return format (e.g., JSON structure), which would be helpful. Still, for an overview tool, this is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so schema description coverage is trivially 100%. The description adds meaning by listing the metrics included in the overview, which is sufficient since no parameters are needed. The baseline for zero parameters is 4, and the description effectively communicates what the tool outputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides a comprehensive Solana ecosystem overview, listing specific metrics like SOL price, TVL, supply, TPS, epoch, and protocol count. This verb+resource combination effectively distinguishes it from sibling tools that focus on more specific aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for getting a broad overview of the Solana ecosystem, but it does not explicitly state when to use this tool versus alternatives like sol_network_stats or sol_protocol_list. No exclusions or alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_protocol_health (B)

DeFi protocol health: TVL, audit status, risk grade. Supports Jupiter, Raydium, Jito, Orca, Drift, Kamino, etc.

Parameters (JSON Schema):
- protocol (required): Protocol name (e.g., jupiter, raydium, jito, orca, drift)
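Because `protocol` is required and the supported list ends in "etc.", a client might normalize the name and warn on values outside the documented examples rather than reject them outright. A sketch, assuming lower-case names as in the schema examples (the helper and the warning behavior are ours):

```python
# Names taken from the tool description; "etc." implies the server accepts more.
KNOWN_PROTOCOLS = {"jupiter", "raydium", "jito", "orca", "drift", "kamino"}

def protocol_health_arguments(protocol: str) -> dict:
    """Arguments object for sol_protocol_health, with the required
    `protocol` name normalized to lower case."""
    name = protocol.strip().lower()
    if name not in KNOWN_PROTOCOLS:
        # Not necessarily an error: unknown names may still be accepted
        # server-side, since the documented list is non-exhaustive.
        print(f"warning: {name!r} is not in the documented protocol list")
    return {"protocol": name}

print(protocol_health_arguments("Jupiter"))
```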
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must fully disclose behavior, but it only lists output fields, omitting details such as whether the tool is a read-only query, any side effects, rate limits, or error handling for invalid protocols.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: one covering outputs, the other listing supported protocols. No extraneous information; efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple tool: specifies inputs and outputs. However, missing details on return format, error handling, and whether the data is live or cached; still usable for an AI agent given low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear parameter description listing examples; the tool description adds a few more protocol names but does not significantly extend meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly indicates the tool retrieves health metrics (TVL, audit status, risk grade) for DeFi protocols and lists supported examples (Jupiter, Raydium, etc.), differentiating it from siblings that focus on yields, volume, or lists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus related tools like sol_defi_yields or sol_protocol_list; lacks any mention of prerequisites, limitations, or alternative scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_protocol_list (A)

All Solana DeFi protocols ranked by TVL

Parameters (JSON Schema): none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states 'ranked by TVL' without disclosing behavioral traits like update frequency, data source, pagination, or scope (e.g., top N?). This is insufficient for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise and front-loaded with key information. However, it is very brief and could include more context without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, no output schema, and no annotations, the description is minimally adequate. It explains the core function but lacks details on output format, data freshness, or any limitations. The agent may need to infer further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are zero parameters, so the schema coverage is 100%. The description does not need to compensate for parameter details. It accurately describes the tool's output without parameter ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all Solana DeFi protocols ranked by TVL, which is a specific verb+resource combination. It distinguishes from sibling tools like sol_defi_yields (top yields) and sol_dex_volume (volume stats).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when a list of protocols by TVL is needed, but does not provide explicit when-to-use or when-not-to-use guidance compared to siblings. No exclusions or alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_stablecoin_risk (B)

Stablecoin supply and risk on Solana (USDC, USDT, PYUSD, etc.)

Parameters (JSON Schema): none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It does not disclose whether the tool requires authentication, has rate limits, or what the return format is (e.g., current snapshot vs historical data). The term 'risk' is ambiguous without further behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that is front-loaded with the key information. It is concise but could be more structured (e.g., stating 'Returns data on...'). No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and no parameters, the description is the sole source of context. It does not specify what metrics are returned (e.g., total supply, risk scores, peg deviation). An agent cannot infer the exact output or how to interpret 'risk'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so schema coverage is 100% trivially. The description adds meaning by specifying the scope (Solana stablecoins: USDC, USDT, PYUSD) and topic (supply and risk), which compensates for the lack of parameters. Baseline for 0 params is 4, and the description meets it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool covers stablecoin supply and risk on Solana, listing specific stablecoins (USDC, USDT, PYUSD). This distinguishes it from sibling tools like sol_token_risk (broader token risk) and sol_overview (general metrics). The phrase 'supply and risk' is vague as an action, but the resource is well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The sibling sol_token_risk might overlap but there is no contrast or exclusion. An agent has no context for choosing between them.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_token_risk (A)

Risk score for Solana SPL tokens. Checks mint/freeze authority, CoinGecko listing, supply. Ideal for Pump.fun token screening.

Parameters (JSON Schema):
- address (required): SPL token mint address
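The schema requires an SPL mint address. A light client-side shape check is one way to fail fast before the network call — a sketch only: the 32-44 character base58 form is the usual Solana public-key encoding, and the real validation happens server-side. The wrapped SOL mint is used here as a known-good example.

```python
import re

# Base58 alphabet (no 0, O, I, l), at the typical Solana pubkey length.
BASE58_RE = re.compile(r"^[1-9A-HJ-NP-Za-km-z]{32,44}$")

WSOL_MINT = "So11111111111111111111111111111111111111112"  # wrapped SOL mint

def token_risk_arguments(address: str) -> dict:
    """Arguments object for sol_token_risk, with a shape check on the
    required SPL mint `address` before it is sent to the server."""
    if not BASE58_RE.match(address):
        raise ValueError(f"not a plausible Solana address: {address!r}")
    return {"address": address}

print(token_risk_arguments(WSOL_MINT))
```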
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes what the tool checks (mint/freeze authority, listing, supply) but does not disclose return format, error handling, or rate limits. With no annotations, the description carries the full burden but is adequate for a simple query tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core purpose, no fluff. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema and does not clarify the risk score format (e.g., numeric range or categories). For a tool returning a score, this omission reduces completeness, though the description is otherwise sufficient for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description of the 'address' parameter. The description adds semantic context by explaining what checks are performed on the address, enhancing understanding beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it provides a risk score for Solana SPL tokens and mentions specific checks (mint/freeze authority, CoinGecko listing, supply). However, it does not explicitly distinguish from sibling tools like sol_stablecoin_risk or sol_defi_yields, leaving the unique differentiation only hinted by the Pump.fun use case.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description suggests when to use the tool ('Ideal for Pump.fun token screening'), providing clear context, but it lacks explicit guidance on when not to use it or what alternatives exist, which would strengthen the dimension.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_validator_info (A)

Validator overview: active/delinquent counts, top 10 by stake, concentration analysis

Parameters (JSON Schema): none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description fails to disclose behavioral traits such as data freshness, permission requirements, or rate limits. The agent must infer it is a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the tool's purpose and provided data. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description is adequate for a simple overview tool, but it lacks details on data source, update frequency, or any edge cases, leaving some ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so the schema provides no details. The description adds meaning by summarizing the output contents (counts, top 10, concentration), which is valuable for an agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description leads with a clear focus ('overview') and lists concrete metrics (active/delinquent counts, top 10 by stake, concentration analysis), distinguishing it from sibling tools like sol_network_stats or sol_wallet_analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It lacks explicit context or exclusions, relying solely on the purpose implied by the description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_wallet_analysis (A)

Analyze a Solana wallet: SOL balance, USD value, SPL token holdings

Parameters (JSON Schema):
- address (required): Solana wallet public key
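No output schema is published for this tool, so the result shape below is purely an assumption for illustration. If the response did expose token holdings as a list of mint/amount pairs, an agent could chain this tool into sol_token_risk, passing each mint as that tool's `address` argument:

```python
def mints_to_screen(wallet_result: dict) -> list[str]:
    """Extract SPL mints from a hypothetical sol_wallet_analysis result.
    The page documents no output schema; the `tokens` list of
    {mint, amount} entries is an assumed shape, not a documented one."""
    return [t["mint"] for t in wallet_result.get("tokens", [])]

# Hypothetical response, matching the metrics the description names.
example = {
    "sol_balance": 1.5,
    "usd_value": 210.0,
    "tokens": [{"mint": "MintA", "amount": 10}, {"mint": "MintB", "amount": 3}],
}
print(mints_to_screen(example))
```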
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of transparency. It discloses that the tool returns SOL balance, USD value, and SPL token holdings, which is useful. However, it does not reveal potential limitations, such as whether the data is real-time, historical, or cached, or what happens on invalid addresses. This is adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
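The invalid-address question is easy to make concrete: a Solana public key is a base58 string that decodes to exactly 32 bytes, so a client (or the tool itself) could pre-validate before issuing a call. A minimal sketch with no external libraries (how this server actually handles bad input is undocumented; this only illustrates the check):

```python
# Base58 alphabet used by Solana addresses (Bitcoin-style: no 0, O, I, l).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def is_valid_solana_address(addr: str) -> bool:
    """Return True if addr is base58 and decodes to exactly 32 bytes."""
    if not addr or any(c not in ALPHABET for c in addr):
        return False
    # Decode base58 to an integer.
    n = 0
    for c in addr:
        n = n * 58 + ALPHABET.index(c)
    # Each leading '1' encodes one leading zero byte.
    leading_zeros = len(addr) - len(addr.lstrip("1"))
    body = n.to_bytes((n.bit_length() + 7) // 8, "big") if n else b""
    return leading_zeros + len(body) == 32
```

For example, the all-ones System Program address passes (32 leading zero bytes), while a hex string like "0xABC" fails on the alphabet check alone.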

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise, front-loaded with the core purpose, and contains no extraneous information. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no output schema, no annotations), the description is fairly complete. It specifies the inputs and outputs. However, it lacks any mention of output format (e.g., JSON), which would be minor but helpful. Overall, it suffices for an agent to understand the tool's basic functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
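For illustration only, this is the kind of output shape an agent might have to assume in the absence of a published schema. Every field name below is hypothetical; the gap noted above is precisely that the server documents none of this:

```python
# Hypothetical response shape for sol_wallet_analysis, based solely on the
# description's promise of SOL balance, USD value, and SPL token holdings.
example_response = {
    "address": "<wallet public key>",
    "sol_balance": 1.5,    # SOL
    "usd_value": 210.75,   # USD at time of query (real-time vs cached: unknown)
    "spl_tokens": [
        {"mint": "<SPL token mint address>", "amount": 100.0},
    ],
}
```

Publishing even a skeleton like this would raise the score, since an agent could then parse the result without trial calls.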

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the only parameter 'address' is described as 'Solana wallet public key'). The tool description adds no further meaning beyond what the schema already provides. Per guidelines, baseline 3 is appropriate when schema covers all parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze a Solana wallet: SOL balance, USD value, SPL token holdings'. It uses a specific verb ('Analyze') and resource ('a Solana wallet'), and lists the outputs. This distinguishes it from sibling tools like sol_bridge_flows or sol_defi_yields, which focus on different aspects of the Solana ecosystem.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. It simply states what the tool does, leaving the agent to infer usage context without explicit direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sol_whale_watch (Grade B)

Monitor recent transactions for any Solana address

Parameters (JSON Schema):
- address (required): Solana address to monitor (no default)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description does not disclose important behavioral traits such as read-only nature, authentication needs, or the time window for 'recent'. With no annotations, the description bears full responsibility but fails to provide these details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence with no wasted words. Efficient, though it could include more detail without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter tool, the description is too minimal. It lacks information about return format, types of transactions, or pagination, which are needed for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
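The pagination gap can be illustrated with the RPC call a tool like this presumably wraps: Solana's getSignaturesForAddress method accepts a limit and a before signature cursor, which is how a client pages through history. The mapping from sol_whale_watch to this method is an assumption; the server's implementation is not documented. A sketch of the request payload:

```python
import json

def signatures_request(address, limit=10, before=None):
    """Build a JSON-RPC payload for Solana's getSignaturesForAddress.

    Passing the last signature of one page as `before` fetches the next
    page, which is the pagination detail the description leaves out.
    """
    config = {"limit": limit}
    if before is not None:
        config["before"] = before
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getSignaturesForAddress",
        "params": [address, config],
    })
```

If the description stated the default window (e.g. "last N signatures") and whether a cursor is exposed, an agent could use the tool correctly on the first attempt.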

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (address parameter described). The description adds no new meaning beyond what the schema provides, so baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'monitor' and the resource 'recent transactions for any Solana address'. It distinguishes from sibling tools like 'sol_wallet_analysis' or 'sol_bridge_flows' by focusing on transaction monitoring.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'sol_wallet_analysis'. There is no mention of prerequisites, use cases, or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
