Server Details

MCP server for Klever blockchain smart contract development.

Status: Healthy
Transport: Streamable HTTP
Repository: klever-io/mcp-klever-vm
GitHub Stars: 3

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

Tool Descriptions: A

Average 4.3/5 across 16 of 16 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have clearly distinct purposes, such as get_balance for token balances, analyze_contract for code analysis, and init_klever_project for project scaffolding. However, there is some overlap between query_context and search_documentation, both of which search the knowledge base, which could cause confusion about which to use for specific queries.

Naming Consistency: 4/5

The naming follows a consistent verb_noun pattern throughout, such as get_balance, analyze_contract, and init_klever_project. Minor deviations exist, like add_helper_scripts (verb_adjective_noun) and enhance_with_context (verb_preposition_noun), but overall, the pattern is clear and predictable.

Tool Count: 4/5

With 16 tools, the count is slightly high but reasonable for the Klever VM domain, which covers blockchain queries, smart contract development, and knowledge base management. It provides comprehensive coverage without being overwhelmingly large, though it could be streamlined by merging overlapping tools.

Completeness: 5/5

The tool set offers complete coverage for Klever VM development, including project setup (init_klever_project, add_helper_scripts), contract analysis and querying (analyze_contract, query_sc), blockchain data retrieval (get_balance, get_transaction, get_block), and knowledge base access (query_context, search_documentation). No obvious gaps are present for the intended scope.

Available Tools

16 tools
add_helper_scripts: A
Annotations: Read-only, Idempotent

Add build, deploy, upgrade, query, test, and interact automation scripts to an existing Klever smart contract project. Creates a scripts/ directory with bash scripts and updates .gitignore. Run this from the project root directory (where Cargo.toml is located). NOTE: In public profile, this tool returns a project template JSON and does not perform any filesystem changes.

Parameters (JSON Schema):
- contractName (optional): The contract name to embed in scripts (e.g. "my-token"). If omitted, auto-detected from the `name` field in Cargo.toml.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations: it specifies the tool creates a 'scripts/' directory, updates '.gitignore', and notes the public profile behavior (returns JSON without filesystem changes). This enhances understanding of the tool's effects without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: the first states the purpose, the second provides usage instructions, and the third gives a critical behavioral note. Each sentence adds essential information without redundancy, though it could be slightly more front-loaded by integrating the public profile note earlier.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (adds scripts to a project), rich annotations (readOnlyHint, idempotentHint), and no output schema, the description is largely complete. It covers purpose, usage context, and key behavioral nuances (public profile vs. filesystem changes). However, it lacks details on error conditions or what happens if the project already has scripts, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'contractName' fully documented in the schema. The description does not add any additional meaning or details about the parameter beyond what the schema provides (e.g., it doesn't explain how auto-detection works or provide examples beyond the schema). Baseline score of 3 is appropriate as the schema carries the burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Add build, deploy, upgrade, query, test, and interact automation scripts') and the target resource ('an existing Klever smart contract project'), distinguishing it from sibling tools like 'init_klever_project' (which creates a new project) and 'analyze_contract' (which analyzes rather than adds scripts). It precisely defines what the tool does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Run this from the project root directory') and provides a critical exclusion ('In public profile, this tool returns a project template JSON and does not perform any filesystem changes'), guiding the agent on context-specific behavior. It also implicitly distinguishes from 'init_klever_project' by specifying 'existing' project.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
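For context, an MCP client invokes a tool like add_helper_scripts with a standard JSON-RPC 2.0 `tools/call` request. The sketch below shows the general request shape; the `contractName` value is a hypothetical example, not taken from this server's docs:

```python
import json

# Sketch of the JSON-RPC 2.0 "tools/call" request an MCP client would send
# to invoke add_helper_scripts. The contractName value is hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_helper_scripts",
        "arguments": {"contractName": "my-token"},
    },
}
print(json.dumps(request, indent=2))
```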

analyze_contract: A
Annotations: Read-only, Idempotent

Analyze Klever smart contract Rust source code for common issues. Checks for missing imports, missing #[klever_sc::contract] macro, missing endpoint annotations, payable handlers without call_value usage, storage mappers without #[storage_mapper], and missing event definitions. Returns findings with severity (error/warning/info) and links to relevant knowledge base entries.

Parameters (JSON Schema):
- sourceCode (required): The full Rust source code of the Klever smart contract to analyze. Must be valid Rust code using klever_sc imports.
- contractName (optional): Human-readable name for the contract (used in output labeling). Defaults to "contract" if omitted.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable behavioral context by specifying what the analysis checks for (e.g., missing imports, macros, annotations) and the output format (findings with severity levels and knowledge base links), which goes beyond the safety profile covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and specific checks, the second describes the output format. Every element adds value without redundancy, and it's front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (static analysis with multiple checks), the description is reasonably complete: it specifies the analysis scope, lists key checks, and describes the output format. However, without an output schema, it could benefit from more detail on the structure of 'findings' (e.g., line numbers, specific error messages). The annotations cover safety aspects well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters (sourceCode and contractName). The description doesn't add any parameter-specific semantics beyond what's in the schema, but it implies the sourceCode must be valid Rust with klever_sc imports, which aligns with the schema. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb ('analyze') and resource ('Klever smart contract Rust source code'), and explicitly lists the checks performed (missing imports, missing macros, endpoint annotations, etc.). It distinguishes itself from sibling tools by focusing on static code analysis rather than account queries, transaction lookups, or project initialization.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying it analyzes 'Klever smart contract Rust source code' and checks for framework-specific issues, but it doesn't explicitly state when to use this tool versus alternatives like 'query_sc' or 'search_documentation'. It provides clear input requirements but lacks explicit comparison with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
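The checks the description lists can be approximated with simple substring scans. The toy sketch below is not the server's implementation; it only illustrates two of the named checks (the missing-macro and missing-endpoint cases):

```python
# Toy approximation of two checks analyze_contract is described as performing.
# Not the server's actual analyzer; severities mirror the error/warning levels
# mentioned in the tool description.
def naive_contract_checks(source_code: str) -> list:
    findings = []
    if "#[klever_sc::contract]" not in source_code:
        findings.append(("error", "missing #[klever_sc::contract] macro"))
    if "#[endpoint" not in source_code:
        findings.append(("warning", "no endpoint annotations found"))
    return findings

snippet = "pub trait MyToken { fn init(&self) {} }"  # hypothetical input
print(naive_contract_checks(snippet))
```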

enhance_with_context: A
Annotations: Read-only, Idempotent

Augment a natural-language query with relevant Klever VM knowledge base context. Extracts Klever-specific keywords, finds matching entries, and returns the original query combined with relevant code examples and documentation in markdown. Use this to enrich a user prompt before answering Klever development questions.

Parameters (JSON Schema):
- query (required): The user's natural-language question or prompt to enhance (e.g. "How do I handle KLV payments in my contract?").
- autoInclude (optional): When true (default), automatically appends the most relevant knowledge base entries to the response. Set to false to only return metadata without injecting context.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide: it explains that the tool extracts Klever-specific keywords, finds matching knowledge base entries, and returns the original query combined with code examples and documentation in markdown format. While annotations cover safety (readOnlyHint, destructiveHint) and idempotency, the description provides operational details about the augmentation process.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and well-structured: two sentences that efficiently explain what the tool does and when to use it. Every word earns its place with no redundancy or unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with comprehensive annotations and full schema coverage but no output schema, the description provides good context about the augmentation process and return format (markdown with combined content). It could be slightly more complete by mentioning what happens when no matches are found or providing more detail about the augmentation algorithm.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents both parameters. The description doesn't add any additional parameter semantics beyond what's in the schema descriptions, so it meets the baseline expectation but doesn't provide extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Augment a natural-language query with relevant Klever VM knowledge base context'), the resource ('Klever VM knowledge base'), and distinguishes from siblings by specifying it's for enriching queries before answering Klever development questions. It explicitly mentions extracting keywords, finding matching entries, and returning combined content in markdown.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use this to enrich a user prompt before answering Klever development questions.' It also distinguishes from potential alternatives like 'query_context' or 'search_documentation' by specifying it's for augmentation rather than direct querying or searching.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
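The keyword-extraction-plus-augmentation flow described above can be sketched as follows. Everything here is invented for illustration (the function name, the keywords, and the knowledge base entries); the real tool's matching logic is not documented on this page:

```python
# Toy sketch of query augmentation: match keywords against a tiny in-memory
# knowledge base and append hits to the query as markdown. The KB entries
# and keywords are hypothetical.
KB = {
    "payment": "## Handling KLV payments\nMark the endpoint as payable.",
    "storage": "## Storage mappers\nAnnotate with #[storage_mapper].",
}

def enhance(query: str) -> str:
    matches = [doc for kw, doc in KB.items() if kw in query.lower()]
    if not matches:
        return query  # nothing relevant found; return the query unchanged
    return query + "\n\n" + "\n\n".join(matches)

print(enhance("How do I accept a payment in my contract?"))
```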

find_similar: A
Annotations: Read-only, Idempotent

Find knowledge base entries similar to a given entry by comparing tags and content. Returns related contexts ranked by similarity score. Useful for discovering related patterns, examples, or documentation after finding one relevant entry.

Parameters (JSON Schema):
- id (required): The context ID to find similar entries for. Obtain from query_context or get_context results.
- limit (optional): Maximum number of similar entries to return. Typical range is 1-20; higher values may be slower. Default: 5.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains that the tool compares 'tags and content' and returns 'ranked by similarity score', which clarifies how similarity is determined and the output format. Annotations already indicate it's read-only, non-destructive, and idempotent, so the description doesn't need to repeat those safety aspects, but it usefully supplements with operational details not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it starts with the core purpose, followed by key behavioral details, and ends with a usage guideline, all in two efficient sentences with no wasted words. Each sentence adds distinct value, making it easy for an AI agent to quickly grasp the tool's function and context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema) and rich annotations (read-only, non-destructive, etc.), the description is mostly complete. It covers purpose, behavior, and usage context well. However, without an output schema, it could benefit from more detail on the return format (e.g., structure of similarity scores), slightly limiting completeness for an agent invoking the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the parameters (id and limit). The description does not add any additional meaning or details about the parameters beyond what's in the schema, such as explaining the similarity algorithm or specific use cases for the limit. Thus, it meets the baseline score of 3 without compensating for any gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('find knowledge base entries similar to a given entry') and resources ('knowledge base entries'), distinguishing it from siblings like query_context or get_context by focusing on similarity rather than direct retrieval. It explicitly mentions comparing 'tags and content' and returning 'related contexts ranked by similarity score', making the purpose distinct and well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('useful for discovering related patterns, examples, or documentation after finding one relevant entry'), implying it should be used after identifying an entry via tools like query_context or get_context. However, it does not explicitly state when not to use it or name specific alternatives among siblings, such as search_documentation for broader searches, which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
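"Comparing tags and content" and "ranked by similarity score" suggests something like set-overlap ranking. The sketch below uses Jaccard similarity over tag sets as one plausible scoring scheme; the entry IDs, tags, and the use of Jaccard are all assumptions, not the server's documented algorithm:

```python
# Toy tag-overlap ranking in the spirit of find_similar: Jaccard similarity
# over tag sets. Entry IDs and tags are invented for illustration.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

ENTRIES = {
    "kb-1": {"storage", "mapper", "rust"},
    "kb-2": {"events", "rust"},
    "kb-3": {"storage", "rust", "endpoint"},
}

def rank_similar(entry_id: str, limit: int = 5):
    target = ENTRIES[entry_id]
    scored = [(other, jaccard(target, tags))
              for other, tags in ENTRIES.items() if other != entry_id]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:limit]

print(rank_similar("kb-1"))  # kb-3 shares two tags, kb-2 only one
```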

get_account: A
Annotations: Read-only, Idempotent

Get full account details for a Klever blockchain address including nonce, balance, frozen balance, allowance, and permissions. Use this when you need comprehensive account state beyond just the balance.

Parameters (JSON Schema):
- address (required): Klever address (klv1... bech32 format).
- network (optional): Network to query. Options: "mainnet", "testnet", "devnet", "local". Defaults to server default (mainnet).

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, idempotent, and non-destructive behavior. The description adds valuable behavioral context by specifying exactly what data fields are returned, compensating for the lack of output schema. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. The first front-loads the specific output fields, while the second provides usage guidance. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description appropriately lists the return fields to compensate. Parameters are fully covered by schema, and annotations cover safety aspects. Complete for a read-only lookup tool, though it could mention the network default behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete descriptions for both 'address' (format specified) and 'network' (enum values listed). The description focuses on return values rather than parameter semantics, so it appropriately rests at the baseline 3 for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Get' with the resource 'account details' and explicitly lists the returned fields (nonce, balance, frozen balance, allowance, permissions). It clearly distinguishes from sibling 'get_balance' by emphasizing 'full' details and 'beyond just the balance'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The second sentence provides explicit contextual guidance: 'Use this when you need comprehensive account state beyond just the balance.' This implies when NOT to use it (for simple balance checks) and suggests the alternative, though it doesn't explicitly name 'get_balance'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_asset_info: A
Annotations: Read-only, Idempotent

Get complete properties and configuration for any asset on the Klever blockchain (KLV, KFI, KDA tokens, NFT collections). Returns supply info, permissions (CanMint, CanBurn, etc.), roles, precision, and metadata. Note: string fields like ID, Name, Ticker are base64-encoded in the raw response.

Parameters (JSON Schema):
- assetId (required): Asset identifier (e.g. "KLV", "KFI", "USDT-A1B2", "MYNFT-XY78").
- network (optional): Network to query. Options: "mainnet", "testnet", "devnet", "local". Defaults to server default (mainnet).

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive) and open-world access. Description adds crucial behavioral detail: 'string fields like ID, Name, Ticker are base64-encoded in the raw response'—critical for correct consumption of output that annotations don't provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly crafted sentences: 1) action and scope, 2) return value details, 3) critical encoding warning. Every sentence earns its place with zero redundancy. Front-loaded with the essential verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Rich description compensates for missing output schema by listing specific return fields (supply, permissions, roles, precision). Combined with comprehensive annotations, this provides sufficient context for a read-only lookup tool. Minor gap: no mention of rate limits or error conditions for invalid asset IDs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for both assetId (with concrete examples) and network (with enum values). Description provides no additional parameter semantics, but baseline 3 is appropriate when schema documentation is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specification: 'Get complete properties and configuration for any asset' provides specific verb and resource. Distinguishes from siblings like get_balance or get_account by enumerating specific return data (supply info, permissions, roles, precision) and asset types (KLV, KFI, KDA tokens, NFT collections).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context by detailing comprehensive property retrieval (permissions, metadata, roles) versus simpler queries, but lacks explicit 'when to use vs alternatives' guidance. No explicit guidance on when to prefer get_balance over this for simple balance checks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
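Since the description warns that string fields like ID, Name, and Ticker arrive base64-encoded, consumers need a decoding step. The sample payload below is invented for illustration; only the decoding itself is the point:

```python
import base64

# Decode the base64-encoded string fields (ID, Name, Ticker) noted in the
# get_asset_info description. The raw_asset payload is a hypothetical example.
raw_asset = {"ID": "S0xW", "Name": "S2xldmVy", "Ticker": "S0xW"}

decoded = {key: base64.b64decode(value).decode("utf-8")
           for key, value in raw_asset.items()}
print(decoded)  # {'ID': 'KLV', 'Name': 'Klever', 'Ticker': 'KLV'}
```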

get_balance: A
Annotations: Read-only, Idempotent

Get the KLV or KDA token balance for a Klever blockchain address. Returns the balance in the smallest unit (for KLV: 1 KLV = 1,000,000 units with 6 decimal places). Optionally specify an asset ID to query a specific KDA token balance instead of KLV.

Parameters (JSON Schema):
- address (required): Klever address (klv1... bech32 format).
- assetId (optional): KDA token ID (e.g. "USDT-A1B2", "LPKLVKFI-3I0N"). Omit for KLV balance.
- network (optional): Network to query. Options: "mainnet", "testnet", "devnet", "local". Defaults to server default (mainnet).

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent/non-destructive safety profile. Description adds critical behavioral context not in annotations: the return value precision ('smallest unit', '1 KLV = 1,000,000 units with 6 decimal places'), which is essential for correct financial calculation. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences efficiently structured: first states purpose, second combines return format and optional parameter logic. Zero redundancy; every word serves the description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter read-only query with 100% schema coverage, the description appropriately covers the unit conversion critical for blockchain financial operations. No output schema exists, so noting the return format is necessary and present. Could marginally improve by noting behavior on invalid addresses.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for all three parameters (address format, assetId examples, network options). Description references address and assetId but does not add significant semantic detail beyond what's already specified in the schema descriptions, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + specific resource 'KLV or KDA token balance' + scope 'for a Klever blockchain address'. Clearly distinguishes from sibling get_account (general account data) and get_asset_info (asset metadata) by focusing specifically on fungible token balances for an address.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear internal usage guidance for the assetId parameter ('Optionally specify... instead of KLV'), explaining the default vs. optional behavior. However, lacks explicit comparison to siblings (e.g., when to use this vs get_account) or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
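The unit conversion the description calls out (1 KLV = 1,000,000 smallest units, 6 decimal places) is the kind of detail agents get wrong with floats. A minimal conversion sketch, using Decimal to keep the arithmetic exact (the raw balance value is a made-up example):

```python
from decimal import Decimal

# KLV has 6 decimal places, so 1 KLV = 1_000_000 smallest units, as stated
# in the get_balance description. Decimal avoids float rounding surprises.
KLV_PRECISION = 6

def units_to_klv(raw_units: int) -> Decimal:
    return Decimal(raw_units) / Decimal(10 ** KLV_PRECISION)

print(units_to_klv(12_345_678))  # 12.345678
```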

get_block: A
Annotations: Read-only, Idempotent

Get block information from the Klever blockchain by nonce (block number). If no nonce is provided, returns the latest block. Returns hash, timestamp, proposer, number of transactions, and other block metadata.

Parameters (JSON Schema):
- nonce (optional): Block number (nonce). Omit to get the latest block.
- network (optional): Network to query. Options: "mainnet", "testnet", "devnet", "local". Defaults to server default (mainnet).

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety. Description adds valuable behavioral details: conditional logic (omit nonce → latest block) and return value structure (hash, timestamp, proposer, tx count, metadata) which is critical given no output schema exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, zero waste: 1) purpose + resource + primary param, 2) conditional behavior/default, 3) return value disclosure. Every sentence earns its place with high information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a read-only query tool: compensates for missing output schema by listing return fields (hash, timestamp, etc.), leverages 100% schema coverage for params, and annotations cover safety profile. No gaps remain for an agent to use this effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description reinforces 'nonce' as 'block number' and explains the omission behavior, adding semantic context beyond the schema's pure type definitions. Network parameter not mentioned but fully documented in schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent: specific verb 'Get', clear resource 'block information from the Klever blockchain', identifies key parameter 'by nonce (block number)', and distinguishes from siblings like get_transaction, get_account by focusing on block-level data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clear context with explicit default behavior: 'If no nonce is provided, returns the latest block' explains when/how to use. Would be 5 if it explicitly compared to get_transaction for transaction-level vs block-level queries, but stands alone well.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_context (A)
Read-only, Idempotent

Retrieve a single knowledge base entry by its unique ID. Returns the full entry including content, metadata, tags, and related context IDs. Use this after query_context or find_similar to get complete details for a specific entry.

Parameters (JSON Schema)

id (required): The unique context ID (UUID format). Obtain IDs from query_context or find_similar results.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the bar is lower. The description adds valuable context by specifying what is returned ('full entry including content, metadata, tags, and related context IDs') and the typical workflow, enhancing transparency without contradicting annotations. It doesn't mention rate limits or auth needs, but that's acceptable given the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a concise usage guideline. Both sentences earn their place by providing essential information without waste, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, no output schema), rich annotations, and high schema coverage, the description is nearly complete. It covers purpose, usage, and return content adequately. A minor gap is the lack of explicit error handling or format details, but overall it provides sufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the 'id' parameter well-documented in the schema. The description adds minimal semantic value beyond the schema by reiterating that IDs come from 'query_context or find_similar', but this is redundant with the schema's description. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve a single knowledge base entry'), resource ('by its unique ID'), and scope ('full entry including content, metadata, tags, and related context IDs'). It explicitly distinguishes from sibling tools by mentioning 'query_context' and 'find_similar' as sources for IDs, making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('after query_context or find_similar to get complete details for a specific entry'), clearly positioning it as a follow-up for detailed retrieval. It implies alternatives by referencing sibling tools for initial queries; it doesn't explicitly state when not to use it, but the context is sufficient for full credit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_knowledge_stats (A)
Read-only, Idempotent

Get summary statistics of the Klever VM knowledge base. Returns total entry count, counts broken down by context type (code_example, best_practice, security_tip, etc.), and a sample entry title for each type. Useful for understanding what knowledge is available before querying.

Parameters (JSON Schema)

No parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), but the description adds valuable context by specifying the exact statistics returned (counts, breakdowns, sample titles) and the tool's exploratory purpose. It doesn't contradict annotations and enhances understanding of what the tool provides beyond basic safety hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the action and detailed output, the second provides usage guidance. Every word adds value, with no redundancy or fluff, making it easy to parse and apply.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless, read-only tool with good annotations, the description is largely complete. It explains the output in detail (compensating for no output schema) and gives clear usage context. A minor gap is lack of explicit mention of error cases or performance characteristics, but overall it's well-suited to the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is high. The description reinforces this by not mentioning any parameters, which is appropriate for a parameterless tool. It focuses instead on output semantics, which is helpful given the lack of an output schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get summary statistics') and resource ('Klever VM knowledge base'), distinguishing it from siblings like query_context or search_documentation by focusing on metadata rather than content retrieval. It explicitly lists the returned data (total entry count, breakdowns by context type, sample titles), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Useful for understanding what knowledge is available before querying.' This directly contrasts with siblings like query_context or search_documentation that are for actual content queries, giving clear context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_transaction (A)
Read-only, Idempotent

Get transaction details by hash from the Klever blockchain. Returns sender, receiver, status, block info, contracts, and receipts. Uses the API proxy for indexed data.

Parameters (JSON Schema)

hash (required): Transaction hash (hex string).
network (optional): Network to query. Options: "mainnet", "testnet", "devnet", "local". Defaults to server default (mainnet).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable return payload documentation ('sender, receiver, status, block info, contracts, and receipts') since no output schema exists. Also discloses implementation detail ('Uses the API proxy for indexed data'). Annotations cover safety profile (readOnly/idempotent), so description appropriately focuses on data structure and source rather than repeating safety traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences: purpose declaration, return value enumeration, and implementation note. Every sentence earns its place with zero redundancy. Information is front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully compensates for missing output schema by exhaustively listing return fields. Combined with 100% schema coverage and complete annotations (safety hints), the description provides sufficient context for a lookup tool without requiring external documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed enums and types. Description mentions 'by hash' which aligns with the required parameter, but does not add syntax clarification or usage examples beyond the schema's 'hex string' specification. Baseline 3 appropriate for comprehensive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get') + resource ('transaction details') + scope identifier ('by hash from the Klever blockchain'). Distinguishes from siblings like get_account, get_block, or get_asset_info by specifying the hash-based lookup method and blockchain context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context through 'by hash' (prerequisite is having the hash), but lacks explicit when-to-use guidance or alternatives. Does not mention when to use this versus get_block (which may contain transactions) or clarify that a hash must be obtained first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

init_klever_project (A)
Read-only, Idempotent

Scaffold a new Klever smart contract project using the SDK. Creates the Rust project structure via ksc new and generates automation scripts (build, deploy, upgrade, query, test, interact). Requires Klever SDK installed at ~/klever-sdk/. Run check_sdk_status first to verify. NOTE: In public profile, this tool returns a project template JSON and does not perform any filesystem changes.

Parameters (JSON Schema)

name (required): The contract project name in kebab-case (e.g. "my-token", "nft-marketplace"). Used as the Cargo package name and directory name.
noMove (optional): When true, keeps the project in the SDK output directory instead of moving it to the current working directory. Default: false.
template (optional): Project template to scaffold from. "empty" creates a blank contract with just an init function. "adder" creates a simple counter example. Default: "empty".
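Since the name is used verbatim as the Cargo package and directory name, an agent may want to validate kebab-case before calling the tool. A hedged sketch (the regex and helper are my own, not part of the tool's documented behavior):

```python
import re

# Accept lowercase words separated by single hyphens, e.g. "my-token".
KEBAB_RE = re.compile(r"^[a-z][a-z0-9]*(-[a-z0-9]+)*$")

def valid_project_name(name: str) -> bool:
    """Pre-flight check for init_klever_project's kebab-case name requirement."""
    return bool(KEBAB_RE.match(name))

valid_project_name("nft-marketplace")  # True
valid_project_name("MyToken")          # False: not kebab-case
```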
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable context beyond this: it specifies the tool's dependency on the SDK installation, recommends a prerequisite check, and clarifies that in public profiles it only returns a template JSON without filesystem changes. This enhances understanding of the tool's operational constraints without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose. Each sentence adds value: the first explains the action, the second covers prerequisites, and the third provides critical context about public profile behavior. There is no redundant information, though the 'NOTE' could be integrated more smoothly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (project scaffolding with dependencies and context-specific behavior), the description is largely complete. It covers purpose, prerequisites, and behavioral nuances. However, without an output schema, it does not describe return values (e.g., what the 'project template JSON' contains), leaving a minor gap. Annotations provide safety and idempotency info, but the description compensates well with operational details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters (name, noMove, template). The description does not add any parameter-specific details beyond what the schema provides, such as explaining the implications of 'noMove' or 'template' choices. However, it meets the baseline of 3 since the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Scaffold a new Klever smart contract project') and the resources involved ('using the SDK', 'creates the Rust project structure', 'generates automation scripts'). It distinguishes this tool from siblings like 'analyze_contract' or 'query_sc' by focusing on project initialization rather than analysis or querying.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Requires Klever SDK installed at ~/klever-sdk/. Run check_sdk_status first to verify.' It also distinguishes behavior for different contexts with the 'NOTE' about public profiles, effectively indicating when-not scenarios. No alternatives are named, but the prerequisites and context exclusions are clearly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_validators (A)
Read-only, Idempotent

List active validators on the Klever blockchain network. Returns validator addresses, names, commission rates, delegation info, and staking amounts.

Parameters (JSON Schema)

network (optional): Network to query. Options: "mainnet", "testnet", "devnet", "local". Defaults to server default (mainnet).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Strong value-add beyond annotations: specifies 'active' filter state (not all validators), and details exact return payload (addresses, names, commission rates, delegation info, staking amounts) which compensates for missing output_schema. Annotations already cover readOnly/destructive/idempotent safety profile, so description doesn't need to repeat those.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. Front-loaded action ('List active validators...') followed by return specification. Every word earns its place; appropriate length for tool complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully complete for its complexity level: compensates for missing output schema by enumerating return fields; annotations handle behavioral safety; schema handles parameter docs. No gaps given the tool's scope (simple list operation with optional network filter).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with comprehensive enum documentation for the 'network' parameter. Description focuses entirely on behavior and return values, adding no parameter-specific semantics. Baseline 3 is appropriate since schema carries full burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'List' (verb) + 'active validators' (resource scoped to Klever blockchain). Distinguishes clearly from siblings like get_account, get_block, or get_transaction by targeting validator-specific data rather than general chain state.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage through clear functional description (you know to use it when you need validator lists), but lacks explicit 'when to use vs alternatives' guidance. For instance, it doesn't clarify when to use this versus get_account for validator-specific accounts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_context (A)
Read-only, Idempotent

Search the Klever VM knowledge base for smart contract development context. Returns structured JSON with matching entries, scores, and pagination. Use this for precise filtering by type or tags; use search_documentation for human-readable "how do I..." answers.

Parameters (JSON Schema)

tags (optional): Filter by tags (e.g. ["storage", "mapper"], ["tokens", "KLV"], ["events"]). Tags are matched with OR logic; any matching tag includes the entry.
limit (optional): Maximum number of results to return (1-100). Default: 10.
query (optional): Free-text search query. Use Klever-specific terms for best results (e.g. "storage mapper SingleValueMapper", "payable endpoint KLV", "deploy contract testnet").
types (optional): Filter results by context type. Omit to search all types. Common combinations: ["code_example", "documentation"] for learning, ["error_pattern"] for debugging, ["security_tip", "best_practice"] for reviews.
offset (optional): Number of results to skip for pagination. Use with limit to page through results. Default: 0.
contractType (optional): Filter by contract type (e.g. "token", "nft", "defi", "dao"). Only returns entries tagged for this contract category.
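The limit/offset pair supports a standard pagination loop. A sketch under assumptions: call_tool is a hypothetical MCP client helper, and the response is assumed to carry its matches under an "entries" key, since the tool publishes no output schema:

```python
def fetch_all(call_tool, query, page_size=10, max_pages=20):
    """Page through query_context results using limit/offset."""
    entries, offset = [], 0
    for _ in range(max_pages):
        page = call_tool("query_context",
                         {"query": query, "limit": page_size, "offset": offset})
        batch = page.get("entries", [])
        entries.extend(batch)
        if len(batch) < page_size:  # a short page means no more results
            break
        offset += page_size
    return entries
```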
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations. While the annotations indicate read-only, non-destructive, idempotent operation, the description also discloses the return format: 'Returns structured JSON with matching entries, scores, and pagination.' That detail is not captured in annotations and tells the agent what to expect from the tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and well-structured. It uses only two sentences: the first states the purpose and return format, the second provides clear usage guidelines and sibling differentiation. Every word earns its place with no redundancy or fluff, and the most important information (what the tool does and when to use it) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with comprehensive annotations (read-only, non-destructive, idempotent) and 100% schema coverage, the description provides good contextual completeness. It explains the return format (structured JSON with scores and pagination) which isn't covered by annotations or an output schema, and gives clear sibling differentiation. The main gap is that without an output schema, more detail about the return structure would be helpful, but the description gives enough for basic understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents the parameters comprehensively. The description's mention of 'precise filtering by type or tags' aligns with the 'types' and 'tags' parameters but essentially restates what the schema documents well. The baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search the Klever VM knowledge base for smart contract development context.' It specifies the exact resource (knowledge base) and domain (smart contract development), and explicitly distinguishes it from its sibling 'search_documentation' by stating 'use this for precise filtering by type or tags; use search_documentation for human-readable "how do I..." answers.' This provides clear differentiation between the two search tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states 'Use this for precise filtering by type or tags; use search_documentation for human-readable "how do I..." answers.' This clearly defines the appropriate context for this tool (structured filtering) versus its sibling (human-readable answers), giving the agent clear decision criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_sc (A)
Read-only, Idempotent

Execute a read-only query against a Klever smart contract (VM view call). Returns the contract function result as base64-encoded return data. Arguments must be base64-encoded. Use this to read contract state without modifying it.

Parameters (JSON Schema)

args (optional): Optional base64-encoded arguments. For addresses, encode the hex-decoded bech32 bytes. For numbers, use big-endian byte encoding.
caller (optional): Optional caller address (klv1... bech32 format). Some view functions use the caller to look up address-keyed storage mappers.
network (optional): Network to query. Options: "mainnet", "testnet", "devnet", "local". Defaults to server default (mainnet).
funcName (required): Function name to call (must be a #[view] function on the contract).
scAddress (required): Smart contract address (klv1... bech32 format).
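The big-endian-then-base64 rule for numeric arguments can be sketched as follows (the helper name is mine; address arguments additionally require a bech32 decode of the klv1... string, which needs a bech32 library and is omitted here):

```python
import base64

def encode_number_arg(value: int) -> str:
    """Encode a non-negative integer as minimal big-endian bytes, then base64."""
    width = max(1, (value.bit_length() + 7) // 8)  # at least one byte, even for 0
    return base64.b64encode(value.to_bytes(width, "big")).decode("ascii")

encode_number_arg(1000)  # -> "A+g=" (0x03E8 as two big-endian bytes)
```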
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Strong value-add beyond annotations: While annotations cover safety (readOnlyHint, destructiveHint, idempotentHint), the description adds critical execution details including return format ('base64-encoded return data'), input encoding requirements ('Arguments must be base64-encoded'), and execution context ('VM view call'). Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four tightly constructed sentences with zero waste: (1) purpose/scope, (2) return format, (3) input requirements, (4) usage guidance. Logical progression and front-loaded with the most important information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for the tool's complexity. Despite no output schema, description specifies return format. Combined with excellent annotations (safety hints, idempotency, openWorldHint) and 100% schema coverage, the description provides sufficient context for successful invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for all 5 parameters. Description reinforces the base64 encoding requirement but does not add substantial semantic meaning beyond what the schema already documents. Baseline 3 appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent clarity: 'Execute a read-only query against a Klever smart contract (VM view call)' provides specific verb (execute/query), specific resource (Klever smart contract), and technical specificity (VM view call) that distinguishes it from general account queries or transaction tools in the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear usage context: 'Use this to read contract state without modifying it' establishes when to use (read-only operations) and implies when not to use (state modification). However, lacks explicit comparison to specific siblings like 'analyze_contract' that might overlap in functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_documentation (A)
Read-only, Idempotent

Search Klever VM documentation and knowledge base. Returns human-readable markdown with titles, descriptions, and code snippets. Optimized for "how do I..." questions. Use this instead of query_context when you need formatted developer documentation.

Parameters (JSON Schema)

- query (required): Search query in natural language (e.g. "how to use storage mappers", "deploy contract to testnet", "handle KDA token transfers").
- category (optional): Narrow results to a specific knowledge category. Available: core, storage, events, tokens, modules, tools, scripts, examples, errors, best-practices, documentation.
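For concreteness, here is how an agent would invoke this tool over the wire. The request follows the generic `tools/call` shape defined by the Model Context Protocol's JSON-RPC framing; the argument values are taken from the schema examples above.

```python
import json

# Sketch of an MCP tools/call request for search_documentation,
# following the JSON-RPC 2.0 shape used by the protocol.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documentation",
        "arguments": {
            "query": "how to use storage mappers",
            "category": "storage",  # optional; one of the listed categories
        },
    },
}
print(json.dumps(request, indent=2))
```

Omitting `category` searches the whole knowledge base; including it narrows results to one of the eleven listed categories.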
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true. The description adds valuable context about the return format ('human-readable markdown with titles, descriptions, and code snippets') and optimization for specific query types, which goes beyond the annotations. No contradictions exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first sentence defines purpose and output, the second provides usage guidelines and differentiation. It is front-loaded with essential information and efficiently structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with optional filtering), rich annotations, and 100% schema coverage, the description is largely complete. It explains purpose, output format, usage context, and sibling differentiation. The main gap is the lack of output-schema details, but the annotations cover safety and idempotency, making the description adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description does not add any parameter-specific details beyond what the schema provides, such as how the query interacts with the category filter or any formatting nuances. A baseline score of 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches 'Klever VM documentation and knowledge base' and returns 'human-readable markdown with titles, descriptions, and code snippets.' It specifically distinguishes from sibling 'query_context' by stating 'Use this instead of query_context when you need formatted developer documentation,' providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: 'Optimized for "how do I..." questions' and 'Use this instead of query_context when you need formatted developer documentation.' This clearly defines when to use this tool versus the alternative sibling tool, with no ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
