
Blueprint Agentic Staking (Solentic)

Server Details

Native Solana staking for AI agents. 26 MCP tools, one-shot signing, webhooks.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: blueprint-infrastructure/solentic-mcp
GitHub Stars: 0
Server Listing
Solentic

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

26 tools
check_address_type (Check Address Type) [A]
Read-only, Idempotent

Detect whether a Solana address is a wallet, stake account, or vote account. Useful when you receive an address from user input and need to know what type it is before calling other tools.

Parameters (JSON Schema):
- address (required): Solana address (base58 public key) to identify
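As a concrete illustration, here is a minimal sketch of what a call to this tool could look like over MCP's JSON-RPC framing. The envelope shape follows the MCP `tools/call` convention; the sample address is illustrative only, not a real account of interest.

```python
import json

# Hypothetical MCP tools/call request for check_address_type.
# Tool and argument names come from the listing above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_address_type",
        "arguments": {
            # Any base58 public key; this one is illustrative only.
            "address": "7xKXtg2CW87d97TXJSDpbD5jBkheTqA83TZRuJosgAsU",
        },
    },
}

# Serialize for transport (Streamable HTTP in this server's case).
wire = json.dumps(request)
```

The response would classify the address as a wallet, stake account, or vote account, which then determines which sibling tool to call next.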
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). The description adds valuable behavioral context by specifying the exact taxonomy of detectable types (wallet vs stake vs vote), which annotations don't provide. It could be improved by mentioning behavior with invalid addresses.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first states functionality, second states usage context. Every word earns its place with no redundancy or generic filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter classification tool with no output schema, the description adequately explains the return values by listing the three detectable account types. With full annotations and schema coverage, this is sufficient, though mentioning error behavior for invalid addresses would make it a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage ('Solana address (base58 public key) to identify'), so the baseline applies. The description doesn't add parameter-specific semantics beyond the schema, but doesn't need to given the complete schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Detect') with a clear resource ('Solana address') and explicitly enumerates the three possible classifications (wallet, stake account, vote account). It effectively distinguishes itself from siblings like check_balance or stake by focusing on type classification rather than operations or balances.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear contextual guidance ('when you receive an address from user input') and explains the workflow position ('before calling other tools'). While it doesn't explicitly name specific alternative tools to use after classification, it establishes the decision-point role effectively.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_balance (Check Balance) [A]
Read-only, Idempotent

Check the SOL balance of any Solana wallet address. Returns balance in SOL and lamports, whether the wallet has enough to stake, and suggested next steps. Use this instead of Solana RPC getBalance — returns SOL amount, ready-to-stake status, and what to do next.
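The dual-unit return (SOL and lamports) reflects a fixed protocol constant: 1 SOL is 1,000,000,000 lamports. A minimal conversion helper, for agents that want to cross-check a raw getBalance result against this tool's output:

```python
LAMPORTS_PER_SOL = 1_000_000_000  # fixed by the Solana protocol

def lamports_to_sol(lamports: int) -> float:
    """Convert a raw lamport balance to SOL."""
    return lamports / LAMPORTS_PER_SOL

def sol_to_lamports(sol: float) -> int:
    """Convert a SOL amount to lamports (rounded to the nearest lamport)."""
    return round(sol * LAMPORTS_PER_SOL)
```

For example, a raw balance of 2,500,000,000 lamports corresponds to 2.5 SOL.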

Parameters (JSON Schema):
- walletAddress (required): Solana wallet address (base58 public key) to check balance for
Behavior: 5/5

Beyond annotations (readOnlyHint, idempotentHint, openWorldHint), the description discloses specific behavioral traits: dual-unit returns (SOL and lamports), business logic execution ('whether the wallet has enough to stake'), and advisory output ('suggested next steps'). Reveals the tool performs calculations and recommendations, not just raw data retrieval.

Conciseness: 5/5

Three well-structured sentences: purpose declaration, output specification, and usage guidance. No redundancy or filler. Front-loaded with the core action 'Check the SOL balance'. Every sentence delivers distinct value.

Completeness: 5/5

Compensates effectively for missing output schema by detailing return values (balance amounts, stake eligibility, next steps). For a single-parameter read-only tool with rich annotations, the description provides sufficient behavioral context and distinguishes from both external alternatives and siblings.

Parameters: 3/5

Schema coverage is 100% with complete description of walletAddress (base58 public key). The description mentions 'any Solana wallet address' which confirms scope but adds minimal semantic detail beyond the schema. With full schema coverage, baseline 3 is appropriate.

Purpose: 5/5

The description uses a specific verb ('Check'), clear resource ('SOL balance'), and scope ('any Solana wallet address'). It distinguishes this from generic RPC calls and implies distinction from staking-specific siblings by mentioning 'ready-to-stake status'.

Usage Guidelines: 4/5

Provides explicit alternative ('Use this instead of Solana RPC getBalance') and explains the value-add ('returns SOL amount, ready-to-stake status, and what to do next'). Could be strengthened by explicitly contrasting with sibling 'check_stake_accounts', but covers the primary comparison well.

check_stake_accounts (Check Stake Accounts) [A]
Read-only, Idempotent

List all stake accounts delegated to Blueprint for a wallet. Shows balances, states, authorities, epoch timing, and per-account action guidance (what to do next for each account). Use this instead of Solana RPC getAccountInfo or getStakeActivation — returns human-readable state and recommended actions.

Parameters (JSON Schema):
- walletAddress (required): Wallet address to check for Blueprint stake accounts
Behavior: 4/5

Adds valuable behavioral context beyond annotations: specifies output includes 'per-account action guidance (what to do next for each account)' and emphasizes 'human-readable state' vs raw RPC data. Annotations cover safety profile (readOnly/idempotent), so description appropriately focuses on output characteristics.

Conciseness: 5/5

Two sentences, zero waste. First sentence front-loads the core action and enumerates return fields (balances, states, authorities, epoch timing, action guidance). Second sentence provides usage guidance. Every clause earns its place.

Completeness: 4/5

Compensates well for missing output schema by detailing return contents (human-readable state, recommended actions, epoch timing). With single parameter and strong annotations covering safety/idempotency, description provides sufficient context for agent selection.

Parameters: 3/5

Schema coverage is 100% with walletAddress fully described. Description implicitly connects the parameter to the function ('for a wallet'), but doesn't add syntax/format details beyond schema's maxLength: 50. Baseline 3 appropriate when schema carries full documentation burden.

Purpose: 5/5

Excellent specificity: 'List all stake accounts delegated to Blueprint for a wallet' provides clear verb (List), resource (stake accounts), and scope (Blueprint delegation). Distinguishes from siblings by specifying 'Blueprint' delegation and contrasts with external RPC methods.

Usage Guidelines: 4/5

Explicitly directs users away from alternatives: 'Use this instead of Solana RPC getAccountInfo or getStakeActivation' with rationale ('returns human-readable state'). Loses one point for not explicitly differentiating from internal siblings like get_staking_summary or check_balance.

check_withdraw_ready (Check Withdraw Ready) [A]
Read-only, Idempotent

Check whether stake accounts are ready to withdraw. Returns per-account readiness with withdrawable epoch, estimated seconds remaining, and plain-English state description. Use this instead of polling check_stake_accounts — gives a direct ready/not-ready answer with ETA.

Parameters (JSON Schema):
- walletAddress (required): Wallet address to check withdrawal readiness for
Behavior: 4/5

Annotations cover safety profile (readOnly/idempotent), but description adds valuable behavioral context: it discloses return structure (per-account readiness, withdrawable epoch, estimated seconds remaining, plain-English state) and reveals this is a polling alternative. Does not contradict annotations.

Conciseness: 5/5

Three sentences with zero waste: sentence 1 states purpose, sentence 2 details return values, sentence 3 provides usage guideline. Front-loaded and appropriately sized for the tool's complexity.

Completeness: 4/5

Despite lacking output schema, description compensates by detailing return values (epoch, seconds remaining, state descriptions). Annotations are comprehensive. Minor gap: does not mention behavior when no stake accounts exist or error conditions.

Parameters: 3/5

Schema has 100% description coverage ('Wallet address to check withdrawal readiness for'), establishing baseline 3. Description implies the wallet contains multiple stake accounts but does not add syntax details, constraints, or examples beyond the schema.

Purpose: 5/5

Description opens with specific verb ('Check') + resource ('stake accounts') + specific condition ('ready to withdraw'). It also distinguishes itself from sibling 'check_stake_accounts' by explaining it gives a 'direct ready/not-ready answer with ETA' rather than raw data.

Usage Guidelines: 5/5

Explicitly states 'Use this instead of polling check_stake_accounts' with clear rationale ('gives a direct ready/not-ready answer with ETA'). Provides clear when-to-use guidance relative to a specific alternative sibling tool.

create_stake_transaction (Create Stake Transaction) [A]

Advanced: Build an unsigned stake transaction for local signing. Most agents should use the stake tool instead, which handles signing and submission automatically. This tool is for agents that manage their own signing infrastructure.

Parameters (JSON Schema):
- amountSol (required): Amount of SOL to stake (minimum 0.00228288 SOL for rent exemption)
- walletAddress (required): Solana wallet address (base58 public key) that will fund and control the stake
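The amountSol minimum quoted above is the rent-exempt reserve a new stake account must hold. A hedged client-side pre-check (mirroring, not replacing, the server's own validation) could look like:

```python
# Minimum from the parameter documentation above: 0.00228288 SOL,
# the rent-exempt reserve for a stake account.
RENT_EXEMPT_MINIMUM_SOL = 0.00228288

def valid_stake_amount(amount_sol: float) -> bool:
    """Return True if the amount meets the documented rent-exempt minimum."""
    return amount_sol >= RENT_EXEMPT_MINIMUM_SOL
```

Checking this before building the transaction saves a round trip when the amount is obviously too small.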
Behavior: 4/5

Adds critical behavioral context beyond annotations: explains the output is 'unsigned' and intended for 'local signing', contrasting with the automatic submission workflow. Annotations cover safety profile (readOnly:false, destructive:false), but description could clarify what specific side effects occur since readOnlyHint is false.

Conciseness: 5/5

Three sentences with zero waste. Front-loaded with the action ('Build an unsigned stake transaction'), followed by immediate alternative guidance and target audience. 'Advanced:' prefix efficiently signals complexity.

Completeness: 4/5

For a 2-parameter tool with 100% schema coverage, the description adequately covers intent and workflow. No output schema exists, but the phrase 'for local signing' combined with context of sibling `submit_transaction` tool implies the return value sufficiently.

Parameters: 3/5

Schema has 100% description coverage (walletAddress and amountSol both documented). Description provides baseline 3 as it doesn't need to compensate for schema gaps, nor does it add conflicting information, but doesn't add syntax details beyond the schema.

Purpose: 5/5

The description uses specific verbs ('Build') and objects ('unsigned stake transaction') and clearly distinguishes from the sibling `stake` tool by specifying 'local signing' vs automatic handling. The scope is precisely defined.

Usage Guidelines: 5/5

Excellent explicit guidance: states 'Most agents should use the `stake` tool instead' and defines the specific audience ('agents that manage their own signing infrastructure'). Names the alternative tool directly.

create_unstake_transaction (Create Unstake Transaction) [A]

Advanced: Build an unsigned unstake transaction for local signing. Most agents should use the unstake tool instead, which handles signing and submission automatically.

Parameters (JSON Schema):
- walletAddress (required): Wallet address that is the stake authority
- stakeAccountAddress (required): Stake account address to deactivate
Behavior: 4/5

Adds crucial context beyond annotations: specifies output is 'unsigned' and requires 'local signing', which clarifies this is a preparatory step not affecting chain state (consistent with annotations). However, doesn't explain why readOnlyHint is false (unusual for a builder) or idempotency implications.

Conciseness: 5/5

Two sentences, zero waste. Front-loaded with 'Advanced' warning to immediately signal complexity. Second sentence provides alternative tool. Dense but readable structure where every word earns its place.

Completeness: 4/5

Strong coverage of workflow (build unsigned → local sign) and clear differentiation from sibling `unstake`. Minor gap: doesn't hint at output format (transaction bytes? JSON?) despite no output schema, though 'unsigned unstake transaction' implies the type.

Parameters: 3/5

Schema coverage is 100% (both walletAddress and stakeAccountAddress well-described in schema). Description doesn't redundantly explain parameters, which is appropriate when schema carries full semantics. Baseline 3 for high-coverage schemas per rubric.

Purpose: 5/5

Excellent specificity: 'Build an unsigned unstake transaction for local signing' combines specific verb (build) with resource (unstake transaction) and scope (unsigned/for local signing). The 'Advanced' tag and explicit distinction from sibling tool `unstake` perfectly clarifies target use case.

Usage Guidelines: 5/5

Explicit when-not-to-use guidance: 'Most agents should use the `unstake` tool instead' clearly directs users to the simpler alternative. It explains the trade-off (automatic signing/submission vs manual local signing), providing complete decision context.

delete_webhook (Delete Webhook) [A]

Delete a webhook registration by ID. Use list_webhooks to find webhook IDs.
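A sketch of the documented list-then-delete workflow, expressed as MCP tools/call payloads. The envelope follows the MCP JSON-RPC convention, and the webhookId value used in the usage note is hypothetical:

```python
# Step 1: list webhooks to discover registration IDs.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "list_webhooks", "arguments": {}},
}

# Step 2: delete a specific registration by ID.
def delete_request(webhook_id: str, rpc_id: int = 2) -> dict:
    """Build a tools/call payload for delete_webhook."""
    return {
        "jsonrpc": "2.0",
        "id": rpc_id,
        "method": "tools/call",
        "params": {
            "name": "delete_webhook",
            "arguments": {"webhookId": webhook_id},
        },
    }
```

Usage: pick an ID from the list response, then send `delete_request("wh_123")` (a made-up ID for illustration).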

Parameters (JSON Schema):
- webhookId (required): Webhook registration ID to delete
Behavior: 3/5

Annotations already declare readOnlyHint=false and destructiveHint=false, establishing this is a state-modifying but non-destructive operation. The description confirms the deletion action but does not add context about side effects, idempotency (already false in annotations), or what happens if the ID doesn't exist.

Conciseness: 5/5

Two sentences with zero waste: first establishes the core operation, second provides essential navigation to the prerequisite tool. Perfectly front-loaded and appropriately sized for a single-parameter deletion tool.

Completeness: 4/5

For a simple delete-by-ID operation with clear annotations and no output schema, the description covers the essential workflow. Minor gap: does not describe error behavior (e.g., invalid ID) or confirm whether deletion is immediate/irreversible, though destructiveHint=false provides some safety context.

Parameters: 3/5

Schema coverage is 100% with webhookId fully documented as 'Webhook registration ID to delete'. The description references 'by ID' which aligns with this parameter but adds no additional semantic information about format, validation rules, or constraints beyond what the schema provides. Baseline 3 appropriate for high schema coverage.

Purpose: 5/5

The description explicitly states the action (Delete), resource (webhook registration), and scope (by ID). It clearly distinguishes this tool from siblings like register_webhook and list_webhooks by specifying the destructive/mutation action on a specific registration.

Usage Guidelines: 5/5

The description provides explicit workflow guidance by naming the sibling tool list_webhooks as the method to obtain the required webhookId parameter. This creates a clear when-to-use pathway (after listing) and implies the prerequisite step.

generate_wallet (Generate Wallet) [A]
Read-only, Idempotent

Get instructions and code to generate a Solana wallet locally. Generate the keypair in YOUR execution environment — not on Blueprint servers. After generating, fund the wallet, then use the stake tool with your walletAddress + secretKey to stake in one call. Your secret key is used in-memory only for signing and is never stored.
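The generate → fund → stake flow described above can be sketched as the final one-shot call. The walletAddress and secretKey argument names come from the description; amountSol is an assumed parameter name (the stake tool's schema is not shown here), and the secret key values below are placeholders that should only ever live in memory:

```python
def stake_call(wallet_address: str, secret_key: str, amount_sol: float) -> dict:
    """Build a one-shot tools/call payload for the stake tool.

    amountSol is an assumed argument name; walletAddress and secretKey
    are taken from the tool description. Never log or persist secret_key.
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "stake",
            "arguments": {
                "walletAddress": wallet_address,
                "secretKey": secret_key,
                "amountSol": amount_sol,
            },
        },
    }
```

Per the description, the server uses the secret key in-memory for signing only, so the payload itself is the last place it should appear on the client side as well.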

Parameters (JSON Schema): none

Behavior: 5/5

Beyond the readOnly/destructive annotations, the description adds critical behavioral context: execution happens in the user's environment ('not on Blueprint servers') and specifies security guarantees ('secret key is used in-memory only...never stored'). This clarifies that despite the 'generate' name, no state is persisted server-side.

Conciseness: 5/5

Three sentences efficiently cover: (1) core purpose, (2) execution environment security, and (3) workflow/next steps with security guarantees. Every sentence adds unique value without redundancy.

Completeness: 4/5

With no output schema, the description adequately explains the return value conceptually ('instructions and code'). It covers the security model and workflow integration. A minor gap is the lack of specificity about code format (e.g., language) or sample outputs.

Parameters: 4/5

Per the rubric, 0 parameters warrants a baseline score of 4. The description appropriately requires no parameter clarification since the schema contains no properties to document.

Purpose: 5/5

The description clearly states the tool retrieves 'instructions and code to generate a Solana wallet locally,' using specific verbs (Get/generate) and resource types. It effectively distinguishes this from sibling tools like `stake` or `check_balance` by emphasizing it provides code rather than performing blockchain operations itself.

Usage Guidelines: 4/5

The description establishes a clear workflow sequence (generate → fund → use `stake` tool) and explicitly references the sibling `stake` tool by name. However, it lacks explicit 'when not to use' guidance or alternative wallet import methods.

get_epoch_timing (Get Epoch Timing) [A]
Read-only, Idempotent

Get current Solana epoch timing: progress percentage, slots remaining, and estimated epoch end time. Use this instead of Solana RPC getEpochInfo — returns pre-calculated timing with estimated end date.
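For context, here is a rough version of the math this tool pre-calculates, assuming Solana's 400 ms target slot time. Real slot times drift, which is why the tool's epoch end time is only an estimate:

```python
# Solana's target slot duration; actual slots vary, so treat
# any derived ETA as an estimate.
TARGET_SLOT_MS = 400

def epoch_progress(slot_index: int, slots_in_epoch: int) -> float:
    """Percentage of the current epoch that has elapsed."""
    return 100.0 * slot_index / slots_in_epoch

def seconds_remaining(slot_index: int, slots_in_epoch: int) -> float:
    """Estimated seconds until the epoch ends, at the target slot time."""
    return (slots_in_epoch - slot_index) * TARGET_SLOT_MS / 1000
```

With mainnet's 432,000-slot epochs, slot index 216,000 would be 50% through the epoch, with roughly a day remaining at the target rate.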

Parameters (JSON Schema): none

Behavior: 4/5

Annotations establish safety profile (readOnly, idempotent). The description adds valuable behavioral context by disclosing this returns 'pre-calculated' data rather than live RPC calls, explaining the intermediate data layer. Could enhance further by mentioning cache freshness or rate limits.

Conciseness: 5/5

Two efficiently constructed sentences with zero redundancy. First sentence specifies the data retrieval action and return fields; second sentence provides usage guidance and alternative comparison. Information density is optimal.

Completeness: 5/5

For a parameter-free read operation with no output schema, the description is complete. It compensates for the missing output schema by enumerating the specific timing fields returned and clarifies the data source distinction from standard RPC calls.

Parameters: 4/5

The tool accepts zero parameters, meeting the baseline score of 4 per rubric. No parameter documentation is required, and the description wisely focuses on return value semantics rather than inventing non-existent inputs.

Purpose: 5/5

The description clearly defines the scope with specific metrics (progress percentage, slots remaining, estimated end time) and identifies the Solana domain, distinguishing it from sibling tools focused on staking, transactions, or wallet management.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit comparative guidance: 'Use this instead of Solana RPC getEpochInfo' and explains the value proposition ('returns pre-calculated timing with estimated end date'), giving clear context for when to select this tool over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_infrastructure · Get Infrastructure · Grade: A
Read-only · Idempotent

Get Blueprint validator infrastructure specs: server hardware, redundancy configuration, network, and storage. Two bare-metal servers (active + hot standby).

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly/idempotent/destructive hints, so the safety profile is covered. The description adds valuable content context by revealing the infrastructure topology (two bare-metal servers, active + hot standby) that helps the agent anticipate the data structure and content to expect.

Conciseness: 5/5

Two sentences with zero waste. Front-loaded with action and resource, followed by content categories, then specific infrastructure details. Every clause provides distinct value.

Completeness: 4/5

Given zero parameters and read-only annotations, the description adequately prepares the agent for the return content by enumerating spec categories and infrastructure topology. It lacks explicit return-format or pagination details, but is sufficient for a simple parameterless info-retrieval tool.

Parameters: 4/5

Zero parameters present. Per the scoring rules, the baseline score of 4 applies when no parameters require semantic documentation.

Purpose: 5/5

Specific verb 'Get' paired with the precise resource 'Blueprint validator infrastructure specs'. Lists concrete categories (hardware, redundancy, network, storage) and adds distinguishing details about the setup (bare-metal servers, active/hot standby) that differentiate it from sibling tools like get_validator_info or get_performance_metrics.

Usage Guidelines: 2/5

Provides no guidance on when to use this versus siblings like get_validator_info (likely software/identity metadata) or get_performance_metrics (runtime statistics). No mention of prerequisites or use-case scenarios.

get_performance_metrics · Get Performance Metrics · Grade: A
Read-only · Idempotent

Get Blueprint validator performance: vote success rate, uptime, skip rate, epoch credits, delinquency status.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Annotations cover safety (readOnlyHint, idempotentHint), so the bar is lower. The description adds valuable context by listing the specific performance metrics returned (vote success, uptime, etc.), effectively documenting the output fields without an output schema. It does not mention rate limits or authentication requirements.

Conciseness: 5/5

The description is a single, dense sentence with the action verb 'Get' front-loaded. Every word serves a purpose: the colon-separated list efficiently communicates the returned data fields without verbose explanation.

Completeness: 4/5

Given zero parameters and rich annotations covering behavioral traits, the description appropriately compensates for the missing output schema by listing the specific metrics returned. It is complete enough for selection but could benefit from noting the data format or units.

Parameters: 4/5

The input schema contains zero parameters. Per the baseline rule, this warrants a score of 4. The description correctly does not invent parameter documentation where none exists.

Purpose: 4/5

The description clearly states it retrieves 'Blueprint validator performance' and enumerates specific metrics (vote success rate, uptime, skip rate, epoch credits, delinquency status). However, it does not explicitly differentiate itself from the sibling tool get_validator_info, leaving ambiguity about when to choose this specific endpoint.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like get_validator_info or get_staking_summary. It lacks prerequisites, conditions, or contextual triggers for invocation.

get_staking_apy · Get Staking APY · Grade: A
Read-only · Idempotent

Get live APY breakdown: base staking APY + Jito MEV APY = total APY. Includes commission rates. Data from StakeWiz API.

Parameters (JSON Schema)

No parameters
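The APY composition this description documents (base staking APY + Jito MEV APY = total APY, with commission rates reported alongside) can be sketched as simple arithmetic. The figures and the commission treatment below are illustrative assumptions, not StakeWiz data:

```python
# Illustrative sketch of the documented APY composition.
# All figures are hypothetical; netting commission out of the gross
# APY is an assumption, not Solentic's documented behavior.

def total_apy(base_apy_pct: float, jito_mev_apy_pct: float) -> float:
    """Total APY = base staking APY + Jito MEV APY."""
    return base_apy_pct + jito_mev_apy_pct

def net_apy(gross_apy_pct: float, commission_pct: float) -> float:
    """Gross APY reduced by a validator commission percentage."""
    return gross_apy_pct * (1 - commission_pct / 100)

gross = total_apy(6.8, 0.9)          # hypothetical base + MEV components
print(round(net_apy(gross, 5.0), 3)) # after a hypothetical 5% commission
```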

Behavior: 4/5

The description adds crucial behavioral context beyond annotations: it discloses the external data dependency (StakeWiz API), explains the compositional formula for total APY (base + Jito MEV), and notes the inclusion of commission rates. Annotations only indicate safety/read-only status, while the description reveals the data lineage and calculation method.

Conciseness: 5/5

Two sentences, zero waste. The first front-loads the action and explains the data composition (base + Jito MEV = total), while the second adds essential context (commissions, API source). Every clause earns its place; no redundancy with the tool name or title.

Completeness: 4/5

Despite lacking an output schema, the description adequately covers which semantic values are returned (base APY, Jito MEV APY, commission rates) and their origin. For a zero-parameter, read-only data-retrieval tool, this explains the conceptual output sufficiently, though explicit field names would elevate it further.

Parameters: 4/5

The input schema contains zero parameters. Per the rubric, zero-parameter tools receive a baseline score of 4. The description correctly does not invent fictional parameters.

Purpose: 5/5

A specific verb ('Get' live data) combined with a clear resource (APY breakdown) and scope (base + Jito MEV components). The explicit mention of 'StakeWiz API' and 'commission rates' distinguishes this from sibling tools like get_staking_summary (user-specific) or simulate_stake (transaction simulation).

Usage Guidelines: 3/5

Specifies what the tool retrieves (live APY breakdown with commissions), implying use when current market rates are needed, but lacks explicit 'when not to use' guidance or named alternatives (e.g., contrast with get_staking_summary for personal stake data). Usage is implied through specificity rather than explicit rules.

get_staking_summary · Get Staking Summary · Grade: A
Read-only · Idempotent

Complete staking portfolio dashboard in a single call. Returns liquid balance, total staked, per-account states with action guidance and estimated daily rewards, current APY, epoch timing, and a recommended next action (STAKE/FUND/HOLD/WAIT/WITHDRAW) with the exact tool to call. Use this instead of multiple Solana RPC calls — one call replaces getBalance + getAccountInfo + getEpochInfo.

Parameters (JSON Schema)

Name | Required | Description
walletAddress | Yes | Solana wallet address (base58 public key) to get staking summary for
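The recommended-next-action enum the summary returns (STAKE/FUND/HOLD/WAIT/WITHDRAW) lends itself to a simple dispatch on the agent side. The action-to-tool mapping below is a hypothetical sketch; per the description, the summary response itself already names the exact tool to call:

```python
from typing import Optional

# Hypothetical mapping from the documented action enum to a follow-up
# tool name; the actual summary response names the tool directly.
ACTION_TO_TOOL = {
    "STAKE": "stake",        # one-shot stake tool
    "WITHDRAW": "withdraw",  # assumed one-shot withdraw tool
    "FUND": None,            # wallet needs SOL first; no tool call
    "HOLD": None,            # nothing to do
    "WAIT": None,            # e.g. waiting out epoch timing
}

def next_tool(recommended_action: str) -> Optional[str]:
    """Return the tool to call next, or None if no call is needed."""
    if recommended_action not in ACTION_TO_TOOL:
        raise ValueError(f"unexpected action: {recommended_action!r}")
    return ACTION_TO_TOOL[recommended_action]
```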
Behavior: 5/5

While annotations declare read-only/idempotent safety, the description adds substantial behavioral context: specific return fields (estimated daily rewards, current APY), decision logic (recommended next actions STAKE/FUND/HOLD/WAIT/WITHDRAW), and output format (the exact tool to call). This reveals business logic not present in the annotations.

Conciseness: 5/5

Two well-structured sentences with zero waste. The first front-loads the value proposition and complete return specification; the second provides clear sibling differentiation. Every clause earns its place.

Completeness: 5/5

Despite lacking an output schema, the description comprehensively documents the complex return structure, including specific action enums and tool recommendations. For a single-parameter dashboard tool, this provides sufficient context for correct invocation and result handling.

Parameters: 3/5

Schema coverage is 100% (walletAddress is fully described in the schema). The description does not add further parameter constraints, format details, or examples beyond what the schema provides, warranting the baseline score of 3.

Purpose: 5/5

The description explicitly defines the tool as a 'Complete staking portfolio dashboard' and lists specific outputs (liquid balance, total staked, per-account states, APY, epoch timing, recommended actions). It clearly distinguishes itself from siblings by stating it replaces multiple Solana RPC calls, including getBalance and getAccountInfo.

Usage Guidelines: 5/5

Explicitly prescribes usage: 'Use this instead of multiple Solana RPC calls' and specifies exactly what it replaces ('one call replaces getBalance + getAccountInfo + getEpochInfo'). Clear guidance on when to prefer this over individual check_balance or get_epoch_timing calls.

get_validator_info · Get Validator Info · Grade: A
Read-only · Idempotent

Get Blueprint validator profile: identity, vote account, commission, active stake, APY, performance, software, location. Live data from StakeWiz API.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Annotations cover safety (readOnlyHint=true, destructiveHint=false) and consistency (idempotentHint=true). The description adds 'Live data from StakeWiz API', which discloses an external dependency and data freshness, but lacks rate limits, caching behavior, or error conditions. It adds some value beyond the annotations but could disclose more behavioral traits.

Conciseness: 5/5

Two sentences, zero waste. Front-loaded with specific return fields. The data-source citation earns its place by indicating an external API dependency. No redundancy with the title or schema.

Completeness: 4/5

Despite the lack of an output schema, the description enumerates the expected return fields (identity, vote account, etc.), effectively documenting the response structure. Annotations cover the operational characteristics. Adequately complete for a read-only, zero-parameter info-retrieval tool.

Parameters: 4/5

Zero parameters present in the schema. Per the rubric, the baseline score is 4 for empty parameter sets. The description correctly does not invent false parameters and implies the tool retrieves current validator data without filters.

Purpose: 5/5

Excellent specificity: states the exact resource (Blueprint validator profile), a comprehensive field list (identity, vote account, commission, APY, etc.), and the data source (StakeWiz API). Clearly distinguishes itself from siblings like get_staking_apy or get_performance_metrics, which return single metrics rather than full profiles.

Usage Guidelines: 2/5

No explicit when-to-use or when-not-to-use guidance. Does not mention when to prefer this over sibling tools like get_staking_apy or check_stake_accounts. While the comprehensive field list implies use for full profiles, explicit alternatives or prerequisites are absent.

list_webhooks · List Webhooks · Grade: B
Read-only · Idempotent

List all registered webhooks for a wallet address.

Parameters (JSON Schema)

Name | Required | Description
walletAddress | Yes | Wallet address to list webhooks for
Behavior: 3/5

Annotations already indicate read-only, idempotent, and non-destructive behavior. The description adds that it lists 'all registered' webhooks, confirming comprehensiveness, but does not elaborate on response format, pagination, or webhook lifecycle details.

Conciseness: 5/5

The description is a single, efficient eight-word sentence with no redundancy. It immediately conveys the tool's function without filler.

Completeness: 4/5

Given the tool's simplicity (single parameter, read-only operation) and comprehensive annotations, the description is sufficient. It does not need to explain return values, since no output schema exists to constrain them.

Parameters: 3/5

With 100% schema description coverage, the schema already fully documents the walletAddress parameter. The description mentions 'wallet address' but adds no semantic details, examples, or format guidance beyond what the schema provides.

Purpose: 4/5

The description clearly states the action (List), resource (registered webhooks), and scope (for a wallet address). However, it does not explicitly differentiate itself from sibling tools like register_webhook or delete_webhook, which would earn a 5.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like register_webhook, nor does it mention prerequisites or filtering capabilities beyond the wallet address.

register_webhook · Register Webhook · Grade: A

Register a callback URL to receive push notifications when stake state changes. Events: withdraw_ready (stake account becomes withdrawable), epoch_complete (new epoch starts), stake_activated (stake begins earning), stake_deactivated (unstake confirmed). Solentic polls every 60 seconds and POSTs to your URL when events fire.

Parameters (JSON Schema)

Name | Required | Description
url | No | Alias for callbackUrl; either field works
events | Yes | Event types to subscribe to
callbackUrl | No | HTTPS callback URL to receive webhook POST requests
walletAddress | Yes | Wallet address to monitor for state changes
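The delivery contract described above (Solentic polls every 60 seconds and POSTs to your URL when an event fires) implies a small validation step on the receiving side. The payload field names below are assumptions for illustration; Solentic's actual webhook body format is not documented here:

```python
import json

# The four event types documented for register_webhook.
KNOWN_EVENTS = {"withdraw_ready", "epoch_complete",
                "stake_activated", "stake_deactivated"}

def handle_webhook_body(body: bytes) -> str:
    """Parse one webhook POST body and return the event name.

    Assumes a JSON body with an 'event' field; anything else raises,
    so a changed or malformed payload fails loudly instead of being
    silently dropped.
    """
    payload = json.loads(body)
    event = payload.get("event")
    if event not in KNOWN_EVENTS:
        raise ValueError(f"unknown event: {event!r}")
    return event
```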
Behavior: 4/5

Beyond the annotations (which indicate external/open-world mutation), the description adds critical operational details: the 60-second polling interval, the POST method to the URL, and specific event semantics. It clarifies the delivery mechanism that annotations don't cover.

Conciseness: 5/5

Three sentences with zero redundancy: purpose statement up front, event enumeration with inline definitions, and the operational mechanism (polling/POST). Every clause delivers essential information without filler.

Completeness: 4/5

Given that no output schema exists, the description adequately covers the operational contract (events, polling frequency, HTTPS requirement). It could be improved by mentioning retry behavior or registration limits, but it captures the essential integration pattern.

Parameters: 4/5

With 100% schema coverage, the baseline is 3. The description elevates this by providing semantic mappings for each event type (e.g., 'withdraw_ready (stake account becomes withdrawable)'), adding business context beyond the schema's simple enum listing.

Purpose: 5/5

The description clearly states the specific action (Register a callback URL), resource (webhook), and trigger condition (stake state changes). It effectively distinguishes itself from sibling tools like list_webhooks or delete_webhook by emphasizing the registration/setup nature of the operation.

Usage Guidelines: 3/5

While the description implies the use case (monitoring stake events), it lacks explicit guidance on when to use this push-based approach versus polling alternatives like check_stake_accounts or check_withdraw_ready. No prerequisites or exclusions are mentioned.

simulate_stake · Simulate Stake · Grade: A
Read-only · Idempotent

Project staking rewards before committing capital. Returns compound interest projections (daily/monthly/annual/total), effective APY after compounding, activation timing, fee reserve guidance, and a natural-language recommendation. Use this to help decide how much to stake and for how long.

Parameters (JSON Schema)

Name | Required | Description
amountSol | Yes | Amount of SOL to simulate staking
durationDays | No | Projection duration in days (default: 365)
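The kind of compound-interest projection the description promises can be approximated with daily compounding. The formulas below are a generic sketch, not Solentic's actual calculation (Solana rewards actually compound per epoch, roughly every two to three days):

```python
def effective_apy(nominal_apy_pct: float, periods_per_year: int = 365) -> float:
    """Effective APY after compounding: ((1 + r/n)^n - 1) * 100."""
    r = nominal_apy_pct / 100
    return ((1 + r / periods_per_year) ** periods_per_year - 1) * 100

def project_rewards(amount_sol: float, nominal_apy_pct: float,
                    duration_days: int = 365) -> float:
    """Projected SOL rewards over duration_days with daily compounding."""
    daily_rate = nominal_apy_pct / 100 / 365
    return amount_sol * ((1 + daily_rate) ** duration_days - 1)
```

With a hypothetical 7% nominal APY, daily compounding lifts the effective rate to roughly 7.25%, so 100 SOL held for a year projects to about 7.25 SOL in rewards.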
Behavior: 4/5

Annotations establish read-only/idempotent safety, but the description adds substantial value by detailing the specific return payload types (daily/monthly/annual projections, effective APY, activation timing, fee guidance, a natural-language recommendation) to compensate for the missing output schema. It reinforces the simulation nature with 'before committing capital'.

Conciseness: 5/5

Two efficiently structured sentences: the first declares the function and detailed return values; the second provides usage context. Zero redundancy, appropriately front-loaded with specific outputs; every sentence earns its place.

Completeness: 5/5

Despite no output schema, the description comprehensively lists the projected return values and guidance types. Combined with the simple two-parameter structure and read-only annotations, the description provides sufficient context for an agent to understand both inputs and outputs without additional schema support.

Parameters: 3/5

Schema description coverage is 100% (both amountSol and durationDays are fully documented). The description references 'how much to stake and for how long', which loosely maps to the parameters but provides no additional syntax, validation constraints, or format details beyond the schema. An appropriate baseline score for high schema coverage.

Purpose: 5/5

The description uses specific verbs ('Project', 'Returns') with a clear resource ('staking rewards'). It effectively distinguishes itself from sibling tools by emphasizing 'before committing capital' and 'simulate', clearly differentiating it from the actual stake and create_stake_transaction tools.

Usage Guidelines: 4/5

The final sentence, 'Use this to help decide how much to stake and for how long', provides clear context for when to invoke the tool (the pre-commitment decision phase). However, it does not explicitly name the alternative stake tool or explicitly state 'do not use for actual staking'.

stake · Stake SOL (one-shot) · Grade: A

Stake SOL with Blueprint validator in a single call. Builds the transaction, signs it with your secret key in-memory, and submits to Solana. Returns the confirmed transaction signature. Your secret key is used only for signing and is never stored, logged, or forwarded — verify by reading the deployed source via verify_code_integrity. This is the recommended tool for autonomous agents.

Parameters (JSON Schema)

Name | Required | Description
amountSol | Yes | Amount of SOL to stake
secretKey | Yes | Your base58-encoded secret key; used in-memory for signing only, never stored
walletAddress | Yes | Your Solana wallet address (base58 public key)
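Because this is an MCP tool, a one-shot stake is just a standard `tools/call` request carrying the three arguments above. The sketch below builds that request body; the wallet and key values are placeholders (never hard-code a real secret key):

```python
import json

def build_stake_call(wallet_address: str, secret_key: str,
                     amount_sol: float, request_id: int = 1) -> str:
    """JSON-RPC body for an MCP tools/call invocation of the stake tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "stake",
            "arguments": {
                "walletAddress": wallet_address,
                # Placeholder only; per the tool description the key is
                # used in-memory for signing and never stored.
                "secretKey": secret_key,
                "amountSol": amount_sol,
            },
        },
    })
```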
Behavior: 5/5

Superb disclosure beyond the annotations. Explicitly details the critical security model (in-memory secret key handling, no storage/logging) and provides a verification mechanism (verify_code_integrity). Also clarifies the return-value semantics (confirmed transaction signature) and the level of blockchain interaction.

Conciseness: 5/5

Every sentence earns its place: (1) action definition, (2) technical execution steps, (3) return value, (4) security guarantee with a verification path, (5) usage recommendation. Front-loaded with the core value proposition; no redundant text.

Completeness: 4/5

Comprehensive for a blockchain mutation tool: covers security verification, confirmation level, and agent suitability. Minor gaps remain regarding network specification (mainnet/devnet), fee disclosure, and explicit minimum stake requirements, though exclusiveMinimum: 0 in the schema partially addresses the latter.

Parameters: 4/5

While schema coverage is 100%, the description adds crucial narrative context about secretKey security handling that reinforces the schema description. It also specifies the validator target (Blueprint), which is absent from the schema parameters, adding operational context not captured in the structured fields.

Purpose: 5/5

Excellent specificity: states the exact action (Stake SOL), target validator (Blueprint), and execution model (single call). Distinguishes itself from siblings like create_stake_transaction (build-only) and unstake by emphasizing the complete build-sign-submit lifecycle.

Usage Guidelines: 4/5

Provides strong contextual guidance by recommending it specifically for autonomous agents and positioning it as a one-shot convenience method. However, it doesn't explicitly contrast with the multi-step alternative (create_stake_transaction + submit_transaction) for users needing offline signing or hardware wallet support.

submit_transaction (Submit Transaction) [grade A]
Destructive

Advanced: Submit a pre-signed transaction to Solana. Only needed if you used create_stake_transaction/create_unstake_transaction/withdraw_stake and signed locally. Most agents should use the one-shot stake/unstake/withdraw tools instead.

Parameters (JSON Schema)
- signedTransaction (required): Fully signed transaction as a base64-encoded string
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish this is destructive, non-read-only, and open-world. The description adds valuable workflow context: the three-step pattern (create → sign locally → submit) and the 'Advanced' classification. Could be improved by mentioning fee consumption or irreversibility specifics, but provides good behavioral context for the target advanced use case.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfect conciseness: two efficiently structured sentences. The first defines the tool and audience (Advanced); the second provides precise workflow context and alternative recommendations. No redundancy or filler; every word serves selection or invocation guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a single-parameter submission tool. Explains the niche workflow gap between transaction creation tools and one-shot alternatives. With annotations covering safety profiles and no output schema present, the description satisfactorily covers contextual needs, though mentioning error scenarios (e.g., 'returns signature or error') would elevate it to 5.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (signedTransaction fully described as 'base64-encoded string'). The description implies the parameter through 'pre-signed transaction' and 'signed locally' context, but doesn't add format details, constraints, or examples beyond what the schema provides. With full schema coverage, the baseline score of 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specific purpose: 'Submit a pre-signed transaction to Solana' clearly identifies the verb (submit), resource (transaction), and domain context. The 'Advanced:' prefix and explicit mention of sibling tools (create_stake_transaction, etc.) immediately distinguishes this from the simpler one-shot stake/unstake/withdraw tools.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Outstanding guidance: explicitly states 'Only needed if you used create_stake_transaction/create_unstake_transaction/withdraw_stake and signed locally' and clearly recommends alternatives ('Most agents should use the one-shot stake/unstake/withdraw tools instead'). Covers both when-to-use and when-not-to-use with specific sibling tool names.
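The advanced path the submit_transaction description prescribes can be sketched as a single MCP tools/call request, built after signing has happened out of band. This is an illustrative sketch: the JSON-RPC envelope follows the standard MCP tools/call shape, the argument name comes from the parameter table above, and the signed-transaction bytes are placeholders, not a real Solana transaction.

```python
import base64
import json


def tools_call(req_id: int, name: str, arguments: dict) -> dict:
    """Build an MCP tools/call JSON-RPC request body."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }


# Placeholder standing in for a fully signed Solana transaction.
# In the real workflow these bytes come from create_stake_transaction
# (or a sibling) plus local signing with your own keypair.
signed_tx = base64.b64encode(b"\x01placeholder-signed-bytes").decode("ascii")

request = tools_call(1, "submit_transaction", {"signedTransaction": signed_tx})
print(json.dumps(request, indent=2))
```

Most agents would skip this and call the one-shot stake/unstake/withdraw tools, as the description itself recommends.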

unstake (Unstake SOL, one-shot) [grade A]

Deactivate a stake account in a single call. Builds the transaction, signs it, and submits it. The stake enters a cooldown period (~1 epoch) and becomes withdrawable at the next epoch boundary. Use check_withdraw_ready to poll readiness, then withdraw to reclaim SOL. This is the recommended tool — use create_unstake_transaction only if you manage your own signing.

Parameters (JSON Schema)
- secretKey (required): Your base58-encoded secret key — used in-memory for signing only, never stored
- walletAddress (required): Your Solana wallet address (stake authority)
- stakeAccountAddress (required): Stake account address to deactivate
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a write operation (readOnlyHint: false) with external side effects (openWorldHint: true). The description adds crucial behavioral context not in annotations: the cooldown period ('~1 epoch'), timing ('becomes withdrawable at the next epoch boundary'), and that signing occurs in-memory. It does not explicitly address idempotency or detailed error states.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences with zero waste. Front-loaded with the core action ('Deactivate...'), followed by mechanism, side effects, workflow next-steps, and sibling differentiation. Every sentence provides distinct value regarding behavior, timing, or tool selection.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a sensitive signing operation with no output schema, the description adequately covers the full lifecycle: deactivation, cooldown timing, polling for readiness, and final withdrawal. It addresses the security model (in-memory signing only) via the schema description, providing sufficient context for safe invocation despite the lack of structured output documentation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured data already documents all three parameters (walletAddress, secretKey, stakeAccountAddress), including the security note for secretKey. The description implies the stake account target but does not elaborate on parameter semantics beyond what the schema provides, warranting the baseline score of 3.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Deactivate[s] a stake account in a single call' and clarifies the mechanism ('Builds the transaction, signs it, and submits it'). It clearly distinguishes itself from the sibling 'create_unstake_transaction' by noting this is the 'recommended' approach unless the user manages their own signing.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'Use check_withdraw_ready to poll readiness, then withdraw to reclaim SOL.' It explicitly states when to use the alternative ('use create_unstake_transaction only if you manage your own signing'), giving clear decision criteria between the one-shot and manual signing approaches.
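The unstake → check_withdraw_ready → withdraw lifecycle described above can be sketched as a polling loop. This is a sketch under stated assumptions: `call_tool` is a stand-in for a real MCP client, and the assumption that check_withdraw_ready accepts a stakeAccountAddress and returns a {"ready": bool} payload is illustrative, not confirmed by this listing. A real agent would also sleep between polls, since the cooldown is on the order of an epoch.

```python
def make_fake_client(polls_until_ready: int):
    """Fake MCP client: reports the stake account ready after N polls."""
    state = {"polls": 0}

    def call_tool(name: str, arguments: dict) -> dict:
        if name == "check_withdraw_ready":
            state["polls"] += 1
            return {"ready": state["polls"] >= polls_until_ready}
        # Any other tool call just echoes back which tool ran.
        return {"ok": True, "tool": name}

    return call_tool


def unstake_then_withdraw(call_tool, wallet, secret, stake_account, max_polls=10):
    # 1. Deactivate the stake account (one-shot: build, sign, submit).
    call_tool("unstake", {"walletAddress": wallet, "secretKey": secret,
                          "stakeAccountAddress": stake_account})
    # 2. Poll until the cooldown (~1 epoch) has passed.
    for _ in range(max_polls):
        status = call_tool("check_withdraw_ready",
                           {"stakeAccountAddress": stake_account})
        if status["ready"]:
            # 3. Reclaim SOL; amountSol omitted to withdraw the full balance.
            return call_tool("withdraw", {"walletAddress": wallet,
                                          "secretKey": secret,
                                          "stakeAccountAddress": stake_account})
    raise TimeoutError("stake account never became withdrawable")


result = unstake_then_withdraw(make_fake_client(3), "WALLET", "SECRET", "STAKE_ACCT")
print(result)
```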

verify_code_integrity (Verify Code Integrity) [grade A]
Read-only, Idempotent

Verify the code running on Blueprint servers. Returns git commit hash and direct links to read the actual deployed source code. Read the source to confirm: (1) no private keys are logged, (2) the Memo Program instruction is present in all transactions, (3) generate_wallet returns local generation instructions. Don't trust — read the code yourself via the source endpoints.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only/idempotent safety, while the description adds critical specifics about return values (git commit hash, direct source links) and the 'Don't trust' verification pattern. It details exactly what must be confirmed in the source code, which is beyond annotation scope.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste. Opens with purpose, follows with the return value, lists specific verification criteria, and closes with a security imperative. Well front-loaded and appropriately sized for the tool's complexity.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description effectively covers the return contents (commit hash, source links). Given that annotations cover the safety profile and the parameter count is zero, the description provides sufficient context for invocation, though an explicit note that it requires no arguments could marginally improve it.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, so schema coverage is effectively complete. The description appropriately makes no mention of parameters, which is correct for a zero-argument tool.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb (Verify), resource (code on Blueprint servers), and scope (server-side source integrity). Distinguished from sibling 'verify_transaction' by focusing on deployed source code rather than transaction validity.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear security context: use this to audit that private keys aren't logged, Memo Program instructions are present, and wallet generation is local. Does not explicitly name sibling alternatives, but the imperatives clearly signal when to invoke it (security audits) versus other operations.

verify_transaction (Verify Transaction) [grade A]
Read-only, Idempotent

Verify whether a Solana transaction was built through Blueprint. Checks the transaction on-chain for the "solentic.theblueprint.xyz" Memo Program instruction. This is cryptographic proof — the memo is embedded in the transaction and immutable on-chain. Use this to verify any claim that a stake was placed through Blueprint. Returns verified: true/false with the on-chain evidence.

Parameters (JSON Schema)
- signature (required): Solana transaction signature to verify
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable behavioral context beyond annotations: specifies the exact memo string ('solentic.theblueprint.xyz'), describes the cryptographic immutability guarantee, and documents return values (verified: true/false with evidence) despite no output schema being present. Aligns with annotations (readOnlyHint=true matches the 'checks' and 'verify' operations).

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four tightly constructed sentences: purpose declaration, technical implementation, behavioral guarantee, and usage/return specification. Well front-loaded with the core action. Every sentence adds distinct value (mechanism, cryptographic properties, return format).

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Richly complete for a verification tool: compensates for the missing output schema by documenting return structure, leverages annotations (readOnly, idempotent, openWorld) to establish the safety profile, and provides the specific memo identifier needed for cryptographic verification. Adequate for correct agent invocation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage ('Solana transaction signature to verify'), establishing a baseline of 3. The description implies the signature is used for on-chain lookup but doesn't add syntax details, validation rules, or format constraints beyond what the schema provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states the exact action (verify), resource (Solana transaction), scope (built through Blueprint), and mechanism (checks for the Memo Program instruction). Clearly distinguishes itself from generic transaction tools like check_balance or submit_transaction by specifying the Blueprint-specific verification purpose.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance: 'Use this to verify any claim that a stake was placed through Blueprint.' Establishes clear context for fraud prevention or audit scenarios. Lacks explicit when-NOT-to-use guidance or named alternatives (e.g., vs generic Solana explorers), but the specificity makes the intended use case clear.
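The verified: true/false contract described above suggests a fail-closed consumer: treat anything other than an explicit boolean true as unverified. A minimal sketch, assuming the documented response shape; the signature value is a placeholder, and the JSON-RPC envelope follows the standard MCP tools/call shape.

```python
# Request an on-chain Memo Program check for a given transaction signature.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "verify_transaction",
        "arguments": {"signature": "PLACEHOLDER_SIGNATURE"},
    },
}


def is_verified(response: dict) -> bool:
    """Accept only an explicit verified: true; any other value fails closed."""
    return response.get("verified") is True


print(is_verified({"verified": True}))   # well-formed positive result
print(is_verified({"verified": "yes"}))  # wrong type: rejected
print(is_verified({}))                   # missing field: rejected
```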

withdraw (Withdraw SOL, one-shot) [grade A]

Withdraw SOL from a deactivated stake account in a single call. Builds the transaction, signs it, and submits it. Funds are returned to your wallet. Use check_withdraw_ready first to confirm the account is ready. Omit amountSol to withdraw the full balance. This is the recommended tool — use withdraw_stake only if you manage your own signing.

Parameters (JSON Schema)
- amountSol (optional): Amount to withdraw in SOL (omit to withdraw full balance)
- secretKey (required): Your base58-encoded secret key — used in-memory for signing only, never stored
- walletAddress (required): Your Solana wallet address (withdraw authority)
- stakeAccountAddress (required): Deactivated stake account to withdraw from
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds workflow details beyond annotations ('Builds the transaction, signs it, and submits it'), explains the outcome ('Funds are returned to your wallet'), and notes the signing behavior. No contradictions with readOnlyHint: false.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Six information-dense sentences with zero waste. Front-loaded with purpose, followed by mechanism, outcome, prerequisites, parameter tips, and sibling distinction. Every sentence earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a blockchain transaction tool: covers prerequisites, success outcome, parameter semantics, and alternative workflows. The absence of an output schema is a gap, but the description adequately explains the fund-return behavior.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage, it adds integration context by emphasizing the 'deactivated' state requirement for the stake account and reinforcing the 'omit amountSol' pattern for full-balance withdrawal.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the specific action (Withdraw SOL) and resource (deactivated stake account), and distinguishes itself from sibling 'withdraw_stake' by emphasizing 'one-shot' and 'recommended tool' status.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names the prerequisite tool ('Use check_withdraw_ready first') and the alternative tool ('use withdraw_stake only if you manage your own signing'), providing clear when-to-use/when-not-to guidance.

withdraw_stake (Withdraw Stake) [grade A]

Advanced: Build an unsigned withdraw transaction for local signing. Most agents should use the withdraw tool instead, which handles signing and submission automatically.

Parameters (JSON Schema)
- amountSol (optional): Amount to withdraw in SOL (omit or null to withdraw full balance)
- walletAddress (required): Wallet address that is the withdraw authority
- stakeAccountAddress (required): Deactivated stake account to withdraw from
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical behavioral context beyond annotations: clarifies that the tool builds unsigned transactions requiring local signing, contrasting with `withdraw`, which handles submission. Annotations cover openWorldHint/readOnly, but the description explains the unsigned/signed workflow distinction.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two highly efficient sentences, with 'Advanced' front-loaded as a complexity signal. The first sentence states the purpose; the second provides guidance. No redundancy or waste.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers the primary needs: purpose, sibling distinction, and workflow (unsigned/local). Prerequisites are covered in the schema (deactivated stake account). No output schema exists, so a return-value explanation is not required.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (all 3 params documented). The description provides workflow context but doesn't add parameter-specific semantics, syntax, or constraints beyond the schema. The baseline score of 3 is appropriate given schema completeness.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Build' with a clear resource, 'unsigned withdraw transaction'. Explicitly distinguishes itself from the sibling tool `withdraw` by contrasting functionality (local signing vs automatic handling).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'Advanced' flag and clear guidance: 'Most agents should use the `withdraw` tool instead'. Names the alternative and explains when to use each (this tool for local signing, `withdraw` for automatic signing/submission).
