get_verification_artifact

Read-only · Idempotent

Obtain a sparse verification artifact with raw calldata and decoding instructions for independent cross-check by a second LLM, enabling adversarial verification of transaction safety before Ledger approval.

Instructions

Return a sparse verification artifact for a prepared tx — raw calldata (or TRON rawDataHex), chain, to/value, payloadHash, preSignHash if preview_send has pinned gas, plus a static prompt instructing a second LLM on how to decode the bytes from scratch. Intended for adversarial independent verification: the user copies this artifact into a second LLM session (different provider recommended) so the second agent produces an independent decode with no shared context from the current conversation. If the two decodes disagree — or if the preSignHash doesn't match what Ledger displays at sign time — the user rejects. Does NOT call any external API; read-only in-memory lookup. Output deliberately omits the server's humanDecode, swiss-knife URL, and 4byte cross-check so the second agent cannot echo them. Handles live in memory for 15 minutes after issuance.
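
For orientation, here is a minimal sketch of what the returned artifact might look like, written as a TypeScript type. The field names and exact shape are assumptions inferred from the description above, not a documented output schema.

// Hypothetical artifact shape; the server's actual field names are not documented on this page.
interface VerificationArtifact {
  chain: string;          // target chain of the prepared tx
  to: string;             // recipient address
  value: string;          // native value being sent
  calldata?: string;      // raw EVM calldata as hex
  rawDataHex?: string;    // TRON raw transaction bytes, carried instead of calldata
  payloadHash: string;    // hash binding the artifact to the prepared tx
  preSignHash?: string;   // Ledger blind-sign hash, present once preview_send has pinned gas
  prompt: string;         // static instructions telling the second LLM how to decode the bytes from scratch
}

The user pastes the whole artifact, prompt included, into a second LLM session with no shared context, then compares that session's decode against the first.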

Input Schema

Name: handle
Required: Yes
Default: (none)
Description: Opaque handle returned by any prepare_* tool. Returns a sparse, copy-paste-friendly JSON artifact carrying the raw calldata (or TRON rawDataHex), chain, recipient, value, payloadHash, and — when preview_send has already pinned gas — the Ledger blind-sign preSignHash. A static prompt telling a second LLM how to independently decode the bytes is included. The artifact intentionally omits the server's humanDecode, swiss-knife URL, and 4byte cross-check so the second agent cannot parrot them.
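
As an illustration of the input side, a tools/call request carrying the handle might look like the sketch below. The JSON-RPC envelope follows the MCP protocol; the handle value is a placeholder for whatever an earlier prepare_* call returned.

// Illustrative MCP tools/call request; "handle" is the only argument this tool takes.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_verification_artifact",
    arguments: {
      handle: "<opaque-handle-from-a-prepare_*-tool>",
    },
  },
};
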
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses behavioral traits beyond annotations: it is read-only, operates in-memory, and artifacts are live for only 15 minutes. It also explicitly states the output deliberately omits certain fields to prevent echoing. These details add significant context that annotations alone do not provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is thorough but somewhat verbose. It front-loads the purpose and key contents, and each sentence adds value. However, it could be slightly more concise without losing essential details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description fully explains what the tool returns, including a list of fields and the static prompt. It also covers the artifact's lifespan and intentional omissions. This provides complete context for an agent to understand and use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although the input schema has full coverage (100%) with a description for 'handle', the tool description adds substantial meaning: it explains that the handle is opaque and returned by any prepare_* tool, and it describes the structure of the returned artifact. This enriches the parameter semantics beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a sparse verification artifact for a prepared transaction. It specifies the exact contents (raw calldata, chain, to/value, payloadHash, preSignHash, and a static prompt) and the intended use (adversarial independent verification). This distinguishes it from sibling tools like get_tx_verification or explain_tx, which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states the tool is intended for adversarial independent verification, instructing users to copy the artifact into a second LLM session from a different provider. It also clarifies that the tool does not call any external API and is a read-only in-memory lookup, setting clear expectations for when to use it.
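
To make the "independent decode" concrete, here is a sketch of the kind of from-scratch check the second agent is asked to perform, assuming the calldata happens to be a plain ERC-20 transfer; real artifacts can carry arbitrary calldata, and this function is illustrative only.

// Decode an ERC-20 transfer(address,uint256) call by hand, without ABI tooling.
function decodeErc20Transfer(calldata: string): { to: string; amount: bigint } {
  const data = calldata.toLowerCase().replace(/^0x/, "");
  const selector = data.slice(0, 8);
  // keccak256("transfer(address,uint256)") begins with a9059cbb
  if (selector !== "a9059cbb") {
    throw new Error(`unexpected selector 0x${selector}: not an ERC-20 transfer`);
  }
  const to = "0x" + data.slice(8 + 24, 8 + 64);              // last 20 bytes of word 1
  const amount = BigInt("0x" + data.slice(8 + 64, 8 + 128)); // word 2 as uint256
  return { to, amount };
}

If a decode like this disagrees with the first session's reading, or the preSignHash differs from what the Ledger screen shows at sign time, the transaction should be rejected.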

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
