Glama
Ownership verified

Server Details

Native Solana staking infrastructure for AI agents. 18 MCP tools for complete staking workflows — stake, unstake, withdraw, verify transactions, check balances, simulate projections, and more. Zero custody design: returns unsigned transactions for client-side signing. ~6% APY via Blueprint's enterprise bare-metal validator with Jito MEV. Built by Blueprint (Hivemind Capital Partners).
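The quoted ~6% APY compounds per epoch on Solana (an epoch lasts roughly two to three days). A rough projection sketch, independent of the server's own simulation tool, assuming a flat 6% APY and ~2.5-day epochs:

```python
# Rough staking projection: a sketch, not the server's simulate tool.
# Assumptions: flat 6% APY, rewards compound once per epoch (~2.5 days).
EPOCHS_PER_YEAR = 365 / 2.5  # ~146 epochs
APY = 0.06

def project(stake_sol: float, years: float) -> float:
    """Compound the per-epoch rate over the holding period."""
    per_epoch_rate = (1 + APY) ** (1 / EPOCHS_PER_YEAR) - 1
    epochs = years * EPOCHS_PER_YEAR
    return stake_sol * (1 + per_epoch_rate) ** epochs

print(round(project(100, 1), 2))  # -> roughly 106 SOL after one year
```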

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.3/5 across 26 of 26 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between the 'stake' and 'create_stake_transaction' tools, and similarly for 'unstake'/'create_unstake_transaction' and 'withdraw'/'withdraw_stake'. The descriptions clarify that the latter in each pair are for advanced use, but an agent could still be confused about which to choose. Overall, the majority of tools are well-differentiated, with clear roles like checking balances, getting information, or managing stakes.

Naming Consistency: 5/5

Tool names follow a consistent snake_case pattern throughout, with clear verb_noun structures (e.g., 'check_balance', 'get_epoch_timing', 'register_webhook'). There are no deviations in style, making it easy for an agent to predict naming conventions and understand tool purposes at a glance.

Tool Count: 3/5

With 26 tools, the count is borderline high for a staking-focused server, as it includes many advanced and utility tools that might overwhelm an agent. While the tools cover a comprehensive range of operations, a more streamlined set could improve usability without sacrificing functionality.

Completeness: 5/5

The tool set provides complete coverage for Solana staking with Blueprint, including core operations (stake, unstake, withdraw), monitoring (balance, accounts, performance), simulation, verification, and webhook management. There are no obvious gaps; agents can handle the entire staking lifecycle from setup to withdrawal with clear guidance.

Available Tools

26 tools
check_address_type (Check Address Type): A
Read-only, Idempotent

Detect whether a Solana address is a wallet, stake account, or vote account. Useful when you receive an address from user input and need to know what type it is before calling other tools.

Parameters (JSON Schema)
address (required): Solana address (base58 public key) to identify
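Like every tool here, this one is invoked over MCP's `tools/call` method. A minimal sketch of the JSON-RPC 2.0 request body, with a placeholder address:

```python
import json

# A minimal MCP "tools/call" request for check_address_type (JSON-RPC 2.0).
# The address value is a placeholder, not a real account.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_address_type",
        "arguments": {"address": "YourBase58PublicKeyHere"},
    },
}
print(json.dumps(request, indent=2))
```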
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint and idempotentHint, the description adds crucial behavioral context about what the detection actually returns (the three specific account types). This complements the annotations by explaining the functional output semantics that annotations cannot express.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states the function, the second states the usage context. It is perfectly front-loaded with the core purpose and appropriately sized for a single-parameter classification tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple classification tool with one parameter and clear annotations, the description is complete. It compensates for the missing output schema by enumerating the three possible account types the tool identifies, though it does not specify behavior for invalid or unrecognized addresses.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('Solana address (base58 public key) to identify'), the schema fully documents the parameter. The description mentions 'address from user input' which adds usage context but does not significantly expand on parameter semantics beyond the schema baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool detects whether a Solana address is a wallet, stake account, or vote account. It uses a specific verb (Detect) with a specific resource and clearly distinguishes this classification function from sibling tools like check_balance or check_stake_accounts which operate on known types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context: 'Useful when you receive an address from user input and need to know what type it is before calling other tools.' This establishes the workflow position (classification before operations) and implies when to use it, though it does not explicitly name specific alternative tools to use after classification.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_balance (Check Balance): A
Read-only, Idempotent

Check the SOL balance of any Solana wallet address. Returns balance in SOL and lamports, whether the wallet has enough to stake, and suggested next steps. Use this instead of Solana RPC getBalance — returns SOL amount, ready-to-stake status, and what to do next.

Parameters (JSON Schema)
walletAddress (required): Solana wallet address (base58 public key) to check balance for
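Balances come back in both units; 1 SOL is exactly 1,000,000,000 lamports. A small conversion sketch with a hypothetical readiness threshold (the tool's actual threshold is not documented here):

```python
LAMPORTS_PER_SOL = 1_000_000_000  # fixed Solana denomination

def lamports_to_sol(lamports: int) -> float:
    return lamports / LAMPORTS_PER_SOL

# Hypothetical threshold: the stake-account rent-exempt minimum
# (~0.00228288 SOL) plus headroom for transaction fees. The server's
# real "ready to stake" logic may differ.
def ready_to_stake(lamports: int, min_sol: float = 0.003) -> bool:
    return lamports_to_sol(lamports) >= min_sol

print(lamports_to_sol(2_500_000_000))  # 2.5
print(ready_to_stake(1_000_000))       # False (only 0.001 SOL)
```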
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only/idempotent safety. The description adds valuable behavioral context: it performs a staking eligibility calculation ('whether the wallet has enough to stake') and returns 'suggested next steps', indicating business logic beyond simple data retrieval.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences. Front-loaded with action and resource. Minor redundancy between sentences (repeats 'ready-to-stake status' and 'what to do next' concepts), but the second sentence earns its place by providing explicit alternative guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read operation, the description is complete. It compensates for the lack of output schema by detailing the return structure (SOL, lamports, staking boolean, next steps), providing sufficient context for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with 'walletAddress' fully documented as 'base58 public key'. The description mentions 'any Solana wallet address' but adds no additional semantic detail (e.g., validation rules, format examples) beyond the schema. Baseline 3 is appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Check') + resource ('SOL balance') + scope ('any Solana wallet address'). It further distinguishes itself by highlighting unique return values (staking status, next steps) that differentiate it from a basic balance query.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names an alternative: 'Use this instead of Solana RPC getBalance'. It clearly signals when to prefer this tool (when you need SOL amount, ready-to-stake status, and actionable next steps rather than raw lamports).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_stake_accounts (Check Stake Accounts): A
Read-only, Idempotent

List all stake accounts delegated to Blueprint for a wallet. Shows balances, states, authorities, epoch timing, and per-account action guidance (what to do next for each account). Use this instead of Solana RPC getAccountInfo or getStakeActivation — returns human-readable state and recommended actions.

Parameters (JSON Schema)
walletAddress (required): Wallet address to check for Blueprint stake accounts
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds significant context beyond annotations by detailing what data is returned (balances, states, authorities, epoch timing, per-account action guidance) and specifying this is scoped to 'Blueprint' stake accounts. Does not contradict readOnlyHint/idempotentHint annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences: first defines scope and outputs, second provides usage guidance. Front-loaded with action verb, zero redundant text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Strong coverage given no output schema exists: describes return data structure (human-readable state + recommended actions) and distinguishes from raw RPC. Annotations cover safety profile. Minor gap: no mention of rate limits or pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with walletAddress fully described. Description implies the parameter with 'for a wallet' but does not add format constraints or examples beyond what the schema provides, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'List' with clear resource 'stake accounts delegated to Blueprint for a wallet'. Distinguishes from siblings like check_balance or check_address_type by specifying 'Blueprint' delegation and comprehensive account details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when NOT to use ('instead of Solana RPC getAccountInfo or getStakeActivation') and what value it provides ('human-readable state and recommended actions'), giving clear alternative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_withdraw_ready (Check Withdraw Ready): A
Read-only, Idempotent

Check whether stake accounts are ready to withdraw. Returns per-account readiness with withdrawable epoch, estimated seconds remaining, and plain-English state description. Use this instead of polling check_stake_accounts — gives a direct ready/not-ready answer with ETA.

Parameters (JSON Schema)
walletAddress (required): Wallet address to check withdrawal readiness for
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare the operation as read-only and safe, the description adds valuable behavioral context about the return payload: 'per-account readiness with withdrawable epoch, estimated seconds remaining, and plain-English state description.' This compensates for the lack of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: purpose declaration, return value specification, and usage guidance. Every sentence earns its place with zero redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter check tool, the description is complete. It covers the operation purpose, output structure (despite no output schema), and comparative usage context. The annotations handle safety properties, allowing the description to focus on functional details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameter 'walletAddress' is fully documented in the structured schema. The description does not add additional parameter semantics, which is acceptable given the schema completeness, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Check') with a clear resource ('stake accounts') and condition ('ready to withdraw'). It explicitly distinguishes itself from the sibling tool 'check_stake_accounts' by contrasting polling vs. direct answers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use this tool versus its sibling: 'Use this instead of polling check_stake_accounts — gives a direct ready/not-ready answer with ETA.' This clearly defines the tool's specific use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_stake_transaction (Create Stake Transaction): A

Advanced: Build an unsigned stake transaction for local signing. Most agents should use the stake tool instead, which handles signing and submission automatically. This tool is for agents that manage their own signing infrastructure.

Parameters (JSON Schema)
amountSol (required): Amount of SOL to stake (minimum 0.00228288 SOL for rent exemption)
walletAddress (required): Solana wallet address (base58 public key) that will fund and control the stake
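The amountSol floor is the stake-account rent-exempt minimum: amounts below it fail on-chain. A client-side guard one might run before calling the tool (a sketch; the server presumably validates this as well):

```python
# Client-side validation before calling create_stake_transaction.
# The 0.00228288 SOL floor comes from the tool's own parameter docs
# (stake-account rent exemption).
MIN_STAKE_SOL = 0.00228288

def validate_stake_amount(amount_sol: float) -> None:
    if amount_sol < MIN_STAKE_SOL:
        raise ValueError(
            f"amountSol must be at least {MIN_STAKE_SOL} SOL for rent exemption"
        )

validate_stake_amount(1.5)  # passes silently
```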
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a write operation (readOnlyHint: false) with external effects (openWorldHint: true). The description adds crucial context that the transaction is 'unsigned' (requiring separate signing) and explains the local signing infrastructure requirement, which annotations cannot convey.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: sentence 1 defines purpose, sentence 2 provides alternative guidance, sentence 3 defines audience. 'Advanced' is front-loaded to immediately signal complexity. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of custom signing workflows and lack of output schema, the description adequately explains the tool's role in the transaction lifecycle. Could be improved by explicitly stating what the tool returns (e.g., 'returns unsigned transaction bytes'), but 'Build... transaction' sufficiently implies the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for both parameters (amountSol includes minimum rent exemption note, walletAddress specifies base58 format). The description does not add parameter-specific semantics, but with complete schema coverage, additional description would be redundant.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Build' with resource 'unsigned stake transaction' and scope 'for local signing'. The 'Advanced' prefix and explicit comparison to the `stake` tool clearly distinguish this from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Most agents should use the `stake` tool instead' and defines the specific audience: 'agents that manage their own signing infrastructure'. Provides clear when-to-use and when-not-to-use guidance with named alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_unstake_transaction (Create Unstake Transaction): A

Advanced: Build an unsigned unstake transaction for local signing. Most agents should use the unstake tool instead, which handles signing and submission automatically.

Parameters (JSON Schema)
walletAddress (required): Wallet address that is the stake authority
stakeAccountAddress (required): Stake account address to deactivate
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations indicate readOnlyHint=false and openWorldHint=true. The description adds crucial behavioral context that this produces an 'unsigned' transaction for 'local signing'—meaning the agent must handle signing and submission separately. This explains the non-destructive creation behavior without contradicting the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence front-loads the advanced nature and core function; the second immediately provides the alternative. Every word earns its place—'Advanced' signals complexity, 'unsigned' and 'local signing' define the output behavior, and the alternative tool is named explicitly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage and clear annotations, the description adequately covers the tool's purpose and usage context. It appropriately omits return value details (no output schema exists) while implying the output type through 'Build an unsigned... transaction.' Could marginally improve by noting this is blockchain-specific, but the staking context makes this reasonably clear.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both parameters ('Wallet address that is the stake authority', 'Stake account address to deactivate'). The description does not add parameter-specific semantics beyond what the schema provides, meeting the baseline expectation for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Build an unsigned unstake transaction') and target ('for local signing'). It explicitly distinguishes from the sibling 'unstake' tool, clarifying this is for advanced use cases requiring manual signing rather than automatic submission.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent guidance provided: labels it as 'Advanced' and explicitly states 'Most agents should use the `unstake` tool instead,' clearly establishing the preference order and when to choose each alternative. The phrase 'which handles signing and submission automatically' precisely defines the boundary between this tool and its sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_webhook (Delete Webhook): A

Delete a webhook registration by ID. Use list_webhooks to find webhook IDs.

Parameters (JSON Schema)
webhookId (required): Webhook registration ID to delete
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover readOnlyHint=false and destructiveHint=false, but the description doesn't reconcile the word 'Delete' with the non-destructive annotation, nor does it explain idempotentHint=false implications (second call on same ID will error). Adds no behavioral context beyond structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. Front-loaded with the core action, followed immediately by the prerequisite workflow. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a low-complexity, single-parameter deletion tool with complete schema coverage and safety annotations. Minor gap regarding error behavior (e.g., invalid ID, already deleted) but sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (webhookId fully documented). Description adds value by specifying the data source workflow ('Use list_webhooks to find webhook IDs'), helping the agent understand how to populate the parameter beyond the schema's type definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (Delete), resource (webhook registration), and scope (by ID). Clearly distinguishes from sibling tools register_webhook (creation) and list_webhooks (listing), as well as unrelated staking/transaction tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly references sibling tool list_webhooks as the prerequisite step to obtain the required webhookId parameter, establishing a clear workflow. Lacks explicit 'when not to use' guidance (e.g., permissions, active webhook considerations).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_wallet (Generate Wallet): A
Read-only, Idempotent

Returns runnable code that creates a Solana keypair. Solentic cannot generate the keypair for you and never sees the private key — generation must happen wherever you run code (the agent process, a code-interpreter tool, a Python/Node sandbox, the user's shell). The response includes the snippet ready to execute. After running it, fund the resulting publicKey and call the stake tool with {walletAddress, secretKey, amountSol} to stake in one call.

Parameters (JSON Schema)
No parameters
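For context: Solana addresses are base58-encoded 32-byte Ed25519 public keys, which is why generation must happen wherever code can run. The stdlib-only sketch below (with a hypothetical `b58encode` helper) shows just the base58 step on random bytes; it does NOT produce a usable keypair, since real generation needs Ed25519, e.g. via the snippet this tool returns:

```python
import os

# Illustrative only: base58-encode 32 random bytes to show the address
# format. NOT a real wallet; Solana keys require Ed25519 generation.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    # Preserve leading zero bytes as '1' characters.
    return "1" * (len(data) - len(data.lstrip(b"\0"))) + out

fake_pubkey = b58encode(os.urandom(32))
print(fake_pubkey)  # typically 43-44 base58 characters
```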

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true, the description adds essential security context beyond annotations: execution happens in user's environment (not servers), and specific guarantees about secret key handling ('used in-memory only', 'never stored'). Explains the tool returns instructions rather than performing the generation itself.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each with distinct value: (1) purpose, (2) execution location constraint, (3) workflow/next steps, (4) security guarantee. No redundancy or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description adequately explains the return type ('instructions and code'). Covers security model, workflow integration, and prerequisites (funding). Minor gap: could specify format of instructions (e.g., JavaScript, CLI).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters. Per scoring rules, 0 params = baseline 4. The description appropriately requires no parameter explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States exactly what the tool does ('Get instructions and code to generate a Solana wallet locally') with specific verb and resource. Clearly distinguishes from sibling tools like 'stake' or 'check_balance' by being the wallet generation entry point.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: generate locally → fund wallet → use 'stake' tool with specific parameters (walletAddress + secretKey). Also specifies critical constraint 'not on Blueprint servers' distinguishing it from server-side operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_epoch_timing (Get Epoch Timing): A
Read-only, Idempotent

Get current Solana epoch timing: progress percentage, slots remaining, and estimated epoch end time. Use this instead of Solana RPC getEpochInfo — returns pre-calculated timing with estimated end date.

Parameters (JSON Schema)
No parameters
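The 'pre-calculated timing' is straightforward to reproduce: a Solana epoch is 432,000 slots and slots average roughly 400 ms. A back-of-envelope sketch (hypothetical helper; the tool's actual output shape may differ):

```python
# Back-of-envelope version of what get_epoch_timing pre-calculates.
# Assumptions: 432,000 slots per epoch, ~400 ms average slot time.
SLOTS_PER_EPOCH = 432_000
AVG_SLOT_SECONDS = 0.4

def epoch_timing(slot_index: int) -> dict:
    slots_remaining = SLOTS_PER_EPOCH - slot_index
    return {
        "progress_pct": round(100 * slot_index / SLOTS_PER_EPOCH, 1),
        "slots_remaining": slots_remaining,
        "est_seconds_to_epoch_end": slots_remaining * AVG_SLOT_SECONDS,
    }

print(epoch_timing(324_000))  # 75% through the epoch, ~12 hours remaining
```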

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable behavioral context that data is 'pre-calculated' and lists the three specific return fields. Could improve by mentioning data freshness or cache duration, but adequately supplements annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first defines capability and outputs, second provides usage guidance versus alternatives. Information is front-loaded and every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description compensates by enumerating the three key return values (progress percentage, slots remaining, estimated end time). Given the tool's simplicity (no inputs, read-only) and rich annotations, this is sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Tool accepts zero parameters. Per scoring rules, baseline score is 4 for parameter-less tools. The description appropriately does not invent parameter semantics where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with resource 'Solana epoch timing' and enumerates exact outputs (progress percentage, slots remaining, estimated epoch end time). It distinguishes itself from the external Solana RPC getEpochInfo alternative, establishing clear scope.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Use this instead of Solana RPC getEpochInfo — returns pre-calculated timing with estimated end date,' providing clear guidance on when to select this tool over the standard RPC alternative based on its pre-calculation feature.

get_infrastructure: Get Infrastructure (A)
Read-only · Idempotent

Get Blueprint validator infrastructure specs: server hardware, redundancy configuration, network, and storage. Two bare-metal servers (active + hot standby).

Parameters (JSON Schema)

No parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnly, idempotent, non-destructive). The description adds valuable architectural context ('active + hot standby') that explains what the infrastructure actually comprises, which annotations don't provide. Does not address data freshness or caching behavior.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. Front-loaded action verb, immediately followed by scope. Second sentence adds specific architectural value (bare-metal redundancy) rather than fluff. Every word earns its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description partially compensates by listing the four spec categories returned. Mention of 'Blueprint validator' provides domain context. Could improve by hinting at response structure or data format, but adequate for a simple read-only retrieval tool.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters with 100% schema coverage (trivially empty object). Per scoring rules, baseline 4 applies for parameterless tools. No parameters require semantic elaboration.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: verb 'Get' + resource 'Blueprint validator infrastructure specs' + enumerated categories (hardware, redundancy, network, storage). The detail about 'Two bare-metal servers (active + hot standby)' clearly distinguishes this from operational siblings like get_performance_metrics or get_validator_info.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance provided. However, the scope is implied by the specific categories listed (hardware/redundancy vs. staking operations). Lacks explicit comparison to similar get_* siblings or prerequisites.

get_performance_metrics: Get Performance Metrics (A)
Read-only · Idempotent

Get Blueprint validator performance: vote success rate, uptime, skip rate, epoch credits, delinquency status.

Parameters (JSON Schema)

No parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only/idempotent safety profile. Description adds valuable behavioral context by enumerating the specific data points returned (vote success rate, etc.), effectively documenting the output structure without an output schema. Does not contradict annotations.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, dense sentence with action front-loaded ('Get Blueprint validator performance') followed by colon-delimited list of specific metrics. Zero redundant words; every term conveys specific semantic value about the returned data.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter read-only tool without output schema, the description is complete. It compensates for missing output schema by enumerating the exact performance metrics returned, giving the agent full awareness of what data to expect.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. Per scoring rules, 0 params establishes a baseline of 4. No parameter description is needed or expected.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Get') and resource ('Blueprint validator performance') with exact metrics returned (vote success rate, uptime, skip rate, epoch credits, delinquency status). The specific metric list distinguishes it from sibling get_validator_info.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context by listing specific performance metrics, suggesting use when monitoring validator health. However, lacks explicit guidance on when to choose this over get_validator_info or whether it requires a specific validator identity to be set elsewhere.

get_staking_apy: Get Staking APY (A)
Read-only · Idempotent

Get live APY breakdown: base staking APY + Jito MEV APY = total APY. Includes commission rates. Data from StakeWiz API.

Parameters (JSON Schema)

No parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only/idempotent nature, but description adds valuable context: data composition (base + Jito MEV = total), inclusion of commission rates, and external data source (StakeWiz API). No contradictions with annotations.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence front-loads core value (APY breakdown with formula), second adds supplementary details (commission rates, data source). Every word earns its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking output schema, description compensates by enumerating return components (base APY, Jito MEV APY, total APY, commission rates). Sufficient for a simple read-only fetch with no input parameters. Minor gap: could clarify if this returns network-average or requires validator selection.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present (empty schema), establishing baseline 4. Description correctly avoids inventing parameters and focuses on return value semantics instead.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'APY breakdown' with explicit scope (base + Jito MEV + commission rates). Distinguishes from action-oriented siblings like 'stake'/'unstake' and account-specific tools like 'check_stake_accounts' by focusing on aggregate market data from StakeWiz.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied through specificity of returned data (live APY breakdown vs other staking metrics), but lacks explicit guidance on when to choose this over 'get_staking_summary' or whether this is for network-wide vs validator-specific APY. No 'when-not-to-use' or alternative recommendations provided.

get_staking_summary: Get Staking Summary (A)
Read-only · Idempotent

Complete staking portfolio dashboard in a single call. Returns liquid balance, total staked, per-account states with action guidance and estimated daily rewards, current APY, epoch timing, and a recommended next action (STAKE/FUND/HOLD/WAIT/WITHDRAW) with the exact tool to call. Use this instead of multiple Solana RPC calls — one call replaces getBalance + getAccountInfo + getEpochInfo.
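
The recommended-next-action field can be pictured as a small decision function. A sketch with hypothetical thresholds (minimum stake, fee reserve) that are not the server's actual rules, only an illustration of how the STAKE/FUND/HOLD/WAIT/WITHDRAW states relate:

```python
def recommend(liquid_sol: float, activating: bool, withdrawable: bool,
              min_stake_sol: float = 0.01,
              fee_reserve_sol: float = 0.005) -> str:
    """Illustrative ordering of the summary's recommended actions.

    Thresholds and precedence are assumptions for this sketch, not the
    server's actual decision logic."""
    if withdrawable:
        return "WITHDRAW"   # a stake account is ready to withdraw
    if activating:
        return "WAIT"       # stake is still activating; nothing to do yet
    if liquid_sol <= fee_reserve_sol:
        return "FUND"       # not enough SOL to cover transaction fees
    if liquid_sol - fee_reserve_sol >= min_stake_sol:
        return "STAKE"      # spare liquid balance could be earning
    return "HOLD"

print(recommend(1.0, activating=False, withdrawable=False))  # STAKE
```

The actual tool pairs each recommendation with the exact tool to call next, which this sketch omits.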

Parameters (JSON Schema)

walletAddress (required): Solana wallet address (base58 public key) to get staking summary for

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds significant context beyond annotations: discloses recommendation engine behavior (returns STAKE/FUND/HOLD/WAIT/WITHDRAW actions), specifies it returns 'exact tool to call', and explains aggregation pattern. Does not mention rate limits or latency implications of aggregation, preventing a 5.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense sentences with zero waste. First sentence front-loads capabilities and return values; second provides usage guidance. Every clause adds distinct value (data types, recommendations, RPC replacement claim).

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent compensation for missing output schema by enumerating all return fields (liquid balance, staked amount, per-account states, APY, epoch timing, recommendations). Adequately explains complex recommendation logic for a dashboard tool.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for walletAddress. Description implies the target wallet through context ('portfolio dashboard') but does not add semantics, constraints, or examples beyond the schema. Baseline 3 appropriate when schema carries full load.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Returns' with comprehensive resource list (liquid balance, total staked, APY, epoch timing, recommendations). Distinguishes from siblings by positioning as a composite dashboard that replaces individual check_balance/get_epoch_timing calls.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit when-to-use guidance: 'Use this instead of multiple Solana RPC calls' and names specific alternatives replaced ('getBalance + getAccountInfo + getEpochInfo'). Clear value proposition for aggregation vs. individual sibling tools.

get_validator_info: Get Validator Info (A)
Read-only · Idempotent

Get Blueprint validator profile: identity, vote account, commission, active stake, APY, performance, software, location. Live data from StakeWiz API.

Parameters (JSON Schema)

No parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, and idempotentHint=true. The description adds valuable context beyond these annotations by specifying the data source ('StakeWiz API') and indicating data freshness ('Live data'). It also discloses what fields are returned, compensating for the lack of output schema.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. The first sentence front-loads the action and specific return fields; the second provides data source context. Every word earns its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameter-less read operation with good annotation coverage, the description is complete. It specifies the external data source (StakeWiz API) and enumerates expected return fields (identity, vote account, etc.), which compensates adequately for the missing output schema.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. According to the evaluation rubric, zero-parameter tools receive a baseline score of 4, as there are no parameter semantics to clarify beyond what the schema provides.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('Blueprint validator profile'), then enumerates exact data points returned (identity, vote account, commission, active stake, APY, performance, software, location). This clearly distinguishes it from siblings like get_performance_metrics or get_staking_apy which presumably return subsets or different data types.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the specific data fields listed imply this is for comprehensive validator profiling, there is no explicit guidance on when to use this versus siblings like get_performance_metrics or get_staking_apy. No prerequisites or exclusion criteria are mentioned.

list_webhooks: List Webhooks (A)
Read-only · Idempotent

List all registered webhooks for a wallet address.

Parameters (JSON Schema)

walletAddress (required): Wallet address to list webhooks for

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds the scope constraint ('all registered' implies complete retrieval for the wallet) and the wallet-specific context, but does not disclose pagination behavior, rate limits, or webhook data structure.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero redundant words. It is immediately front-loaded with the action and resource, making it easy for an agent to parse during tool selection.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter), complete schema coverage, and absence of an output schema, the description is sufficiently complete for agent invocation. It appropriately omits return value details (none defined) but could briefly mention what webhook attributes are returned.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents the walletAddress parameter. The description mentions 'wallet address' but adds no additional semantic context (e.g., format expectations, validation rules) beyond what the schema provides, warranting the baseline score.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') with clear resource ('registered webhooks') and scope ('for a wallet address'). It effectively distinguishes from siblings register_webhook and delete_webhook by implying a read-only retrieval operation scoped to a specific wallet.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the verb 'List' (retrieval/inspection), but provides no explicit when-to-use guidance or mention of alternatives like register_webhook or delete_webhook. The agent must infer appropriate usage from the verb and sibling tool names.

register_webhook: Register Webhook (A)

Register a callback URL to receive push notifications when stake state changes. Events: withdraw_ready (stake account becomes withdrawable), epoch_complete (new epoch starts), stake_activated (stake begins earning), stake_deactivated (unstake confirmed). Solentic polls every 60 seconds and POSTs to your URL when events fire.
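
Before registering, a client can sanity-check its subscription against the four documented event names and the HTTPS requirement on callbackUrl. A minimal sketch; the delivery payload format is not specified here, so only the request side is checked:

```python
# The four event names come from the tool description above.
VALID_EVENTS = {"withdraw_ready", "epoch_complete",
                "stake_activated", "stake_deactivated"}

def subscription_problems(events: list[str], callback_url: str) -> list[str]:
    """Return a list of problems with a prospective register_webhook call
    (an empty list means the request looks well-formed)."""
    problems = []
    if not callback_url.startswith("https://"):
        problems.append("callback URL must be HTTPS")
    if not events:
        problems.append("subscribe to at least one event")
    unknown = sorted(set(events) - VALID_EVENTS)
    if unknown:
        problems.append("unknown events: " + ", ".join(unknown))
    return problems

print(subscription_problems(["withdraw_ready"], "https://example.com/hook"))  # []
```

Because registration is not idempotent, re-running the same call may create duplicate webhooks; checking the request first avoids cluttering the wallet's webhook list with failed variants.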

Parameters (JSON Schema)

url (optional): Alias for callbackUrl — either field works
events (required): Event types to subscribe to
callbackUrl (optional): HTTPS callback URL to receive webhook POST requests
walletAddress (required): Wallet address to monitor for state changes

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable operational details beyond annotations: the 60-second polling interval, POST method to callback URL, and semantic definitions of each event type (e.g., 'withdraw_ready = stake account becomes withdrawable'). Does not contradict annotations (readOnlyHint: false aligns with 'Register'). Minor gap: does not disclose idempotentHint: false behavior (multiple calls create multiple webhooks).

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three dense sentences with zero waste: first states purpose, second enumerates events with parenthetical explanations, third provides operational mechanics (polling frequency and HTTP method). Perfectly front-loaded and appropriately sized.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a registration tool without output schema: covers event types, triggering conditions, and delivery mechanism. Would benefit from noting webhook persistence (until deleted via delete_webhook) or payload format, but sufficiently complete for correct invocation.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description elevates this by explaining event semantics (what each event signifies practically) and clarifying that the URL receives POST requests. Could explicitly note the url/callbackUrl alias relationship mentioned in schema, but covers the essential contract well.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Register), resource (callback URL), and scope (stake state changes) with precise event enumeration. Clearly distinguishes from sibling tools like list_webhooks or delete_webhook by emphasizing the creation of new push notification subscriptions.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear operational context (60-second polling, POST method) that helps determine suitability versus polling alternatives like check_stake_accounts. However, it does not explicitly name polling siblings as alternatives or specify when push notifications are preferable to direct queries.

simulate_stake: Simulate Stake (A)
Read-only · Idempotent

Project staking rewards before committing capital. Returns compound interest projections (daily/monthly/annual/total), effective APY after compounding, activation timing, fee reserve guidance, and a natural-language recommendation. Use this to help decide how much to stake and for how long.
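
The compounding behind projections like these can be sketched as follows, assuming the quoted APY is a nominal annual rate compounded daily; the real tool may model epoch boundaries, activation delay, and fees differently, so these numbers are approximations:

```python
def project_rewards(amount_sol: float, apy_pct: float,
                    duration_days: int = 365) -> dict:
    """Sketch of daily-compounding reward math; result keys are
    illustrative, not the tool's actual output schema."""
    daily_rate = apy_pct / 100 / 365
    final = amount_sol * (1 + daily_rate) ** duration_days
    effective_apy = ((1 + daily_rate) ** 365 - 1) * 100
    return {
        "dailySol": round(amount_sol * daily_rate, 6),
        "totalRewardSol": round(final - amount_sol, 6),
        "effectiveApyPct": round(effective_apy, 4),
    }

p = project_rewards(100, 6.0)   # ~6% APY, as quoted for this validator
print(p["effectiveApyPct"])     # just above 6.0 because of compounding
```

This is why the tool distinguishes the quoted APY from the "effective APY after compounding": daily reinvestment lifts the realized annual yield slightly above the nominal rate.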

Parameters (JSON Schema)

amountSol (required): Amount of SOL to simulate staking
durationDays (optional): Projection duration in days (default: 365)

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent disclosure beyond annotations. While annotations declare readOnly/idempotent status, the description details specific return values (daily/monthly/annual/total projections, effective APY, activation timing, fee reserve guidance, natural-language recommendation) that would otherwise be unknown without an output schema.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with purpose front-loaded. The list of return values is slightly dense but every item earns its place by describing critical financial outputs. No redundant or filler text present.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description comprehensively compensates by enumerating all return components (projections, APY, timing, recommendations). Combined with clear parameter documentation and safety annotations, this provides complete context for a simulation tool.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, documenting both amountSol and durationDays. The description references 'how much to stake' and 'for how long,' providing contextual mapping, but does not add semantic details (formats, constraints) beyond what the schema already provides. Baseline 3 is appropriate given complete schema coverage.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Project', 'Returns') and clearly identifies the resource (staking rewards). It explicitly distinguishes from the actual 'stake' sibling tool by emphasizing 'before committing capital' and 'simulate' semantics.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context with 'Use this to help decide how much to stake and for how long,' indicating the planning phase. However, it does not explicitly name the sibling 'stake' tool as the alternative for actual execution, stopping short of explicit when-not-to-use guidance.

stake: Stake SOL (one-shot) (A)

Stake SOL with Blueprint validator in a single call. Builds the transaction, signs it with your secret key in-memory, and submits to Solana. Returns the confirmed transaction signature. Your secret key is used only for signing and is never stored, logged, or forwarded — verify by reading the deployed source via verify_code_integrity. This is the recommended tool for autonomous agents.
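
As an MCP tool, stake is invoked with a standard tools/call request. A sketch of the JSON-RPC body, with placeholder argument values and transport framing (Streamable HTTP headers, session handling) omitted:

```python
import json

# Illustrative JSON-RPC body for calling the one-shot stake tool over MCP.
# tools/call is the standard MCP method; the argument values below are
# placeholders, not real keys.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "stake",
        "arguments": {
            "walletAddress": "<base58 public key>",
            "amountSol": 1.5,
            "secretKey": "<base58 secret key, used for signing only>",
        },
    },
}

print(json.dumps(request, indent=2))
```

Since the secret key travels inside the arguments, clients should only send this call over TLS to a server they have audited, e.g. via verify_code_integrity as the description suggests.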

Parameters (JSON Schema)

amountSol (required): Amount of SOL to stake
secretKey (required): Your base58-encoded secret key — used in-memory for signing only, never stored
walletAddress (required): Your Solana wallet address (base58 public key)

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations declare openWorldHint and readOnlyHint, but the description goes well beyond them with critical operational context: the exact execution flow (builds/signs/submits), security guarantees about secret key handling ('never stored, logged, or forwarded'), and auditability via verify_code_integrity.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each earning its place: (1) core purpose, (2) technical execution flow and return value, (3) security guarantee, (4) usage recommendation. Front-loaded with the essential action, zero redundancy.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a sensitive financial operation involving secret keys, the description adequately covers security practices, return value (transaction signature), and verification methods. Minor gap: no output schema exists and error conditions are not described, though 'Returns the confirmed transaction signature' covers the success case sufficiently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with detailed constraints (pattern, minLength/maxLength for secretKey, exclusiveMinimum for amountSol). The description reinforces the secretKey security model but does not add semantic meaning beyond what the schema already provides, which is appropriate given the comprehensive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states 'Stake SOL with Blueprint validator in a single call' — specific verb, resource, target validator, and scope. The 'one-shot' framing in both title and description clearly distinguishes it from the multi-step sibling tools create_stake_transaction and submit_transaction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit context with 'single call' (implying when to use vs. multi-step alternatives) and 'recommended tool for autonomous agents'. Mentions verify_code_integrity for security verification. Missing explicit guidance on when to prefer create_stake_transaction (e.g., manual inspection needs).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

submit_transaction · Submit Transaction · Grade: A
Destructive · Inspect

Advanced: Submit a pre-signed transaction to Solana. Only needed if you used create_stake_transaction/create_unstake_transaction/withdraw_stake and signed locally. Most agents should use the one-shot stake/unstake/withdraw tools instead.

Parameters (JSON Schema)
- signedTransaction (required): Fully signed transaction as a base64-encoded string
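The `signedTransaction` field is the raw signed transaction bytes, base64-encoded. A sketch of that encoding step, assuming the transaction has already been built and signed locally (for example with a Solana SDK); the byte string below is a stand-in, not a valid transaction.

```python
import base64

def encode_signed_transaction(signed_tx_bytes: bytes) -> str:
    # submit_transaction expects the fully signed transaction
    # serialized and base64-encoded as a single string.
    return base64.b64encode(signed_tx_bytes).decode("ascii")

# Placeholder bytes standing in for a locally signed transaction.
fake_signed_tx = b"\x01signed-transaction-bytes"
arguments = {"signedTransaction": encode_signed_transaction(fake_signed_tx)}
```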
Behavior: 4/5

Annotations cover safety profile (destructiveHint, readOnlyHint), but description adds valuable workflow context that annotations cannot provide: the 'Advanced' classification and the specific local-signing workflow requirement. Does not explain what 'destructive' means in blockchain context (irreversible submission) or error behaviors, preventing a 5.

Conciseness: 5/5

Two sentences with zero waste. Front-loaded with 'Advanced' warning. First sentence establishes purpose; second provides precise usage guidelines with specific alternative tools named. Every word earns its place.

Completeness: 5/5

Given the single parameter with complete schema documentation, existing annotations covering behavioral safety (destructiveHint, openWorldHint), and the clear distinction from sibling tools provided in the description, the definition is complete for this tool's scope. No output schema exists, so return value explanation is not expected.

Parameters: 3/5

Input schema has 100% description coverage ('Fully signed transaction as a base64-encoded string'), establishing baseline 3. Description mentions 'pre-signed' which aligns with the parameter name, but adds no additional semantic details about format, encoding specifics, or sourcing beyond what the schema already provides.

Purpose: 5/5

Description clearly states the specific action (submit pre-signed transaction) and target system (Solana). The 'Advanced' prefix immediately signals complexity, and the text explicitly distinguishes this from sibling tools like 'stake', 'unstake', and 'withdraw' by specifying this is for locally-signed transactions only.

Usage Guidelines: 5/5

Excellent explicit guidance: states exactly when to use ('Only needed if you used create_stake_transaction/create_unstake_transaction/withdraw_stake and signed locally') and explicitly recommends alternatives ('Most agents should use the one-shot stake/unstake/withdraw tools instead'). Names specific sibling tools as the preferred alternatives.

unstake · Unstake SOL (one-shot) · Grade: A
Inspect

Deactivate a stake account in a single call. Builds the transaction, signs it, and submits it. The stake enters a cooldown period (~1 epoch) and becomes withdrawable at the next epoch boundary. Use check_withdraw_ready to poll readiness, then withdraw to reclaim SOL. This is the recommended tool — use create_unstake_transaction only if you manage your own signing.

Parameters (JSON Schema)
- secretKey (required): Your base58-encoded secret key — used in-memory for signing only, never stored
- walletAddress (required): Your Solana wallet address (stake authority)
- stakeAccountAddress (required): Stake account address to deactivate
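The workflow the description lays out (deactivate, wait out the ~1 epoch cooldown, poll check_withdraw_ready, then withdraw) can be sketched as a small polling loop. The `check_withdraw_ready` callable and its `{"ready": ...}` response shape below are stand-ins for the actual tool call, not the server's documented response format.

```python
import time

def wait_until_withdrawable(check_withdraw_ready, stake_account,
                            poll_seconds=60.0, max_polls=10):
    # Poll until the deactivated stake account passes the cooldown
    # (~1 epoch) and is reported ready to withdraw.
    for _ in range(max_polls):
        if check_withdraw_ready(stake_account).get("ready"):
            return True
        time.sleep(poll_seconds)
    return False

# Stubbed responses: not ready on the first poll, ready on the second.
responses = iter([{"ready": False}, {"ready": True}])
ready = wait_until_withdrawable(lambda _acct: next(responses),
                                "<stake account address>", poll_seconds=0)
```

In practice the poll interval should be generous: readiness only changes at an epoch boundary, so polling faster than get_epoch_timing suggests just wastes calls.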
Behavior: 4/5

Adds critical timing context beyond annotations: 'cooldown period (~1 epoch)' and 'becomes withdrawable at the next epoch boundary.' Discloses the composite nature (builds, signs, submits) despite annotations only showing generic hints. No contradictions with annotations.

Conciseness: 5/5

Perfectly structured: action → timing mechanics → workflow integration → alternative comparison. Five sentences covering distinct aspects with zero redundancy.

Completeness: 4/5

Comprehensive for a transaction tool without output schema. Covers preconditions (active stake account), side effects (cooldown), polling mechanism (check_withdraw_ready), and sibling alternatives. Could mention failure modes but adequate as-is.

Parameters: 3/5

Schema coverage is 100% with detailed parameter descriptions already. The tool description reinforces the one-shot nature but doesn't add significant semantic meaning beyond what's in the schema property descriptions.

Purpose: 5/5

Explicitly states 'Deactivate a stake account in a single call' with clear action verb and resource. Distinguishes from sibling create_unstake_transaction by emphasizing this handles signing internally.

Usage Guidelines: 5/5

Provides explicit guidance: 'This is the recommended tool — use create_unstake_transaction only if you manage your own signing.' Also documents the full workflow sequence using check_withdraw_ready and withdraw.

verify_code_integrity · Verify Code Integrity · Grade: A
Read-only · Idempotent · Inspect

Verify the code running on Blueprint servers. Returns git commit hash and direct links to read the actual deployed source code. Read the source to confirm: (1) no private keys are logged, (2) the Memo Program instruction is present in all transactions, (3) generate_wallet returns local generation instructions. Don't trust — read the code yourself via the source endpoints.

Parameters (JSON Schema)

No parameters
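One practical use of the returned commit hash is pinning: audit the source once, record the hash, and flag any later drift. The response shape below (a `commitHash` field plus source links) is a hypothetical illustration, not the tool's documented output schema.

```python
def audit_integrity_response(response: dict, pinned_commit: str) -> bool:
    # Hypothetical response shape: compare the reported git commit hash
    # against the hash recorded during a previous manual source audit.
    return response.get("commitHash") == pinned_commit

# Placeholder values; a real response would carry the deployed commit
# hash and direct links to the source endpoints.
resp = {"commitHash": "abc123", "sourceLinks": ["<source endpoint URL>"]}
ok = audit_integrity_response(resp, "abc123")
```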

Behavior: 5/5

Annotations indicate read-only/idempotent safety, but the description adds substantial behavioral context: it discloses exactly what data is returned (git hash, source endpoints), explains the verification methodology (manual source code reading), and specifies three critical security properties to validate in the codebase. No contradictions with annotations.

Conciseness: 4/5

Four sentences with zero waste: (1) purpose declaration, (2) output specification, (3) actionable verification checklist, (4) security principle. Front-loaded with the core action. The checklist format efficiently conveys complex security requirements without verbosity.

Completeness: 5/5

Despite the absence of an output schema, the description comprehensively compensates by detailing the return structure (git hash, links) and providing explicit instructions on interpreting the results. Combined with clear annotations, this provides complete context for a security-audit tool.

Parameters: 4/5

The tool accepts zero parameters, establishing a baseline of 4. The description appropriately makes no attempt to describe non-existent inputs, focusing instead on output semantics and usage patterns.

Purpose: 5/5

The description clearly states the specific action (verify) and resource (code running on Blueprint servers), distinguishing it from operational siblings like stake/unstake/check_balance. It specifies unique outputs (git commit hash, source links) that clearly differentiate it from other verification-related tools like get_verification_links.

Usage Guidelines: 4/5

While it lacks explicit 'use this instead of X' statements, it provides strong implicit guidance through a specific 3-item security checklist (private keys, Memo Program, generate_wallet behavior). The imperative 'Don't trust — read the code yourself' effectively signals when to invoke this tool (during security audits or before trusting sensitive operations).

verify_transaction · Verify Transaction · Grade: A
Read-only · Idempotent · Inspect

Verify whether a Solana transaction was built through Blueprint. Checks the transaction on-chain for the "solentic.theblueprint.xyz" Memo Program instruction. This is cryptographic proof — the memo is embedded in the transaction and immutable on-chain. Use this to verify any claim that a stake was placed through Blueprint. Returns verified: true/false with the on-chain evidence.

Parameters (JSON Schema)
- signature (required): Solana transaction signature to verify
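The check this tool performs can be reproduced independently: fetch the transaction via Solana's `getTransaction` RPC with `jsonParsed` encoding and scan for the memo instruction. A sketch, assuming the standard SPL Memo program id and the common jsonParsed layout where the memo text appears in the instruction's `parsed` field; verify those assumptions against the RPC docs before relying on this.

```python
BLUEPRINT_MEMO = "solentic.theblueprint.xyz"
# SPL Memo program id; the "parsed" field carrying the raw memo text is
# an assumption about the jsonParsed RPC response shape.
MEMO_PROGRAM_ID = "MemoSq4gqABAXKb96qnH8TysNcWxMyWCqXgDLGmfcHr"

def has_blueprint_memo(parsed_tx: dict) -> bool:
    # Scan a jsonParsed getTransaction result for the Blueprint memo
    # instruction that verify_transaction checks on-chain.
    message = parsed_tx.get("transaction", {}).get("message", {})
    return any(
        ix.get("programId") == MEMO_PROGRAM_ID and ix.get("parsed") == BLUEPRINT_MEMO
        for ix in message.get("instructions", [])
    )

# Mock transaction containing the Blueprint memo instruction.
verified_tx = {"transaction": {"message": {"instructions": [
    {"programId": MEMO_PROGRAM_ID, "parsed": BLUEPRINT_MEMO},
]}}}
```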
Behavior: 4/5

Annotations already declare readOnly, idempotent, and non-destructive traits. The description adds valuable behavioral context: the specific verification mechanism (Memo Program instruction lookup), the cryptographic nature of the proof (immutable on-chain memo), and the return value structure (verified boolean with evidence). No contradictions with annotations.

Conciseness: 5/5

Five information-dense sentences with zero waste: purpose (sentence 1), technical mechanism (2), security properties (3), usage context (4), and return value (5). Front-loaded with the core verb and resource. No redundant phrases.

Completeness: 4/5

For a single-parameter verification tool with comprehensive annotations, the description adequately covers the verification mechanism, cryptographic guarantees, and return value format (compensating for the missing output schema). Could mention error handling for invalid signatures, but sufficiently complete for invocation.

Parameters: 3/5

With 100% schema description coverage (signature is fully documented as 'Solana transaction signature to verify'), the baseline is 3. The description implies the signature should relate to Blueprint transactions but does not add syntactic details, validation rules, or format guidance beyond the schema.

Purpose: 5/5

The description opens with a precise action ('Verify whether a Solana transaction was built through Blueprint') that identifies the specific resource (Solana transactions) and the unique verification target (Blueprint origin). It clearly distinguishes from siblings like check_balance or verify_code_integrity by specifying the 'solentic.theblueprint.xyz' Memo Program check.

Usage Guidelines: 4/5

Provides explicit when-to-use guidance ('Use this to verify any claim that a stake was placed through Blueprint'), clearly linking the tool to stake verification scenarios. Lacks explicit 'when-not-to-use' or named alternative tools, preventing a 5, but the contextual usage is clear.

withdraw · Withdraw SOL (one-shot) · Grade: A
Inspect

Withdraw SOL from a deactivated stake account in a single call. Builds the transaction, signs it, and submits it. Funds are returned to your wallet. Use check_withdraw_ready first to confirm the account is ready. Omit amountSol to withdraw the full balance. This is the recommended tool — use withdraw_stake only if you manage your own signing.

Parameters (JSON Schema)
- amountSol (optional): Amount to withdraw in SOL (omit to withdraw full balance)
- secretKey (required): Your base58-encoded secret key — used in-memory for signing only, never stored
- walletAddress (required): Your Solana wallet address (withdraw authority)
- stakeAccountAddress (required): Deactivated stake account to withdraw from
Behavior: 4/5

Annotations establish the safety profile (readOnlyHint: false, destructiveHint: false). The description adds valuable workflow context: the three-step atomic process (build, sign, submit), funds destination ('returned to your wallet'), and the prerequisite check requirement. It does not mention retry behavior given idempotentHint: false or specific error conditions, preventing a perfect score.

Conciseness: 5/5

Six sentences, zero waste. Front-loaded with purpose ('Withdraw SOL...'), followed by mechanism, outcome, prerequisite, parameter hint, and alternative recommendation. Every sentence provides actionable guidance for tool selection and invocation.

Completeness: 4/5

Given the rich annotations and complete input schema, the description provides excellent contextual guidance including prerequisites and sibling differentiation. Minor gap: no mention of return values or transaction confirmation behavior, which would be helpful given the lack of an output schema and the financial nature of the operation.

Parameters: 3/5

With 100% schema description coverage, the schema carries the semantic burden. The description references parameters in context ('Omit amountSol') but does not add syntax, format details, or validation rules beyond what the schema already provides for the four parameters. This meets the baseline expectation for high-coverage schemas.

Purpose: 5/5

The description explicitly states the core action ('Withdraw SOL from a deactivated stake account') and specifies the unique 'one-shot' nature (builds, signs, submits). It clearly distinguishes from sibling 'withdraw_stake' by noting this is the 'recommended tool' versus the alternative for 'own signing' management.

Usage Guidelines: 5/5

Provides explicit prerequisites ('Use check_withdraw_ready first'), clear alternative selection criteria ('use withdraw_stake only if you manage your own signing'), and parameter guidance ('Omit amountSol to withdraw the full balance'). The when-to-use logic is unambiguous.

withdraw_stake · Withdraw Stake · Grade: A
Inspect

Advanced: Build an unsigned withdraw transaction for local signing. Most agents should use the withdraw tool instead, which handles signing and submission automatically.

Parameters (JSON Schema)
- amountSol (optional): Amount to withdraw in SOL (omit or null to withdraw full balance)
- walletAddress (required): Wallet address that is the withdraw authority
- stakeAccountAddress (required): Deactivated stake account to withdraw from
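The full-balance versus partial-withdrawal distinction comes down to whether `amountSol` is present in the arguments. A sketch with hypothetical placeholder addresses; the resulting unsigned transaction would be signed locally and forwarded to submit_transaction.

```python
# Hypothetical withdraw_stake arguments. Omitting amountSol (or passing
# null) withdraws the full balance; the tool returns an unsigned
# transaction to sign locally and forward to submit_transaction.
full_balance_args = {
    "walletAddress": "<withdraw authority address>",
    "stakeAccountAddress": "<deactivated stake account>",
    # amountSol omitted: withdraw the entire balance
}

# Partial withdrawal: same arguments plus an explicit amount in SOL.
partial_args = dict(full_balance_args, amountSol=0.25)
```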
Behavior: 4/5

Beyond annotations (readOnly=false, openWorld=true), the description adds crucial workflow context that this creates an unsigned transaction requiring external signing, not an immediate blockchain state change. It does not mention the need to use `submit_transaction` afterward, which would have made it a 5.

Conciseness: 5/5

Two tightly constructed sentences with zero redundancy. The first sentence front-loads the classification ('Advanced') and core function; the second provides the alternative recommendation. Every word earns its place.

Completeness: 4/5

Given the 100% schema coverage and clear sibling differentiation, the description is nearly complete. It lacks only an explicit mention of the output (the unsigned transaction bytes/object) and the subsequent step of using `submit_transaction`, though this is partially implied.

Parameters: 3/5

With 100% schema description coverage (all 3 parameters documented), the baseline is 3. The description does not add parameter-specific semantics (e.g., explaining what 'deactivated' means for the stake account), but this is acceptable given the comprehensive schema.

Purpose: 5/5

The description uses specific verbs ('Build an unsigned withdraw transaction') and clearly identifies the resource (unsigned transaction for stake withdrawal). It explicitly distinguishes itself from the sibling `withdraw` tool by specifying 'local signing' vs automatic handling.

Usage Guidelines: 5/5

Excellent guidance: explicitly labels it 'Advanced' for the specific use case of local signing, and directly names the alternative tool (`withdraw`) that 'most agents should use instead' for automatic signing and submission.
