Glama

Noesis — Solana On-Chain Intelligence

Server Details

Solana on-chain intelligence — token scans, wallet profiling, bundle detection, 19 MCP tools.

Status: Healthy
Transport: Streamable HTTP
Repository: Rengon0x/NoesisAPI
GitHub Stars: 0
Server Listing: NoesisAPI

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.3/5 across 19 of 19 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes focused on specific aspects of Solana intelligence, such as token analysis, wallet profiling, and transaction parsing. However, some overlap exists between token_scan and other token-specific tools, as token_scan aggregates multiple metrics that are also available individually, which could cause confusion about when to use it versus specialized tools.

Naming Consistency: 5/5

Tool names follow a highly consistent snake_case pattern with clear prefixes indicating the resource type (e.g., token_, wallet_, chain_, transactions_). This predictable structure makes it easy to understand the tool's domain and purpose at a glance, enhancing usability for agents.

Tool Count: 4/5

With 19 tools, the count is slightly high but reasonable for the broad scope of Solana on-chain intelligence, covering tokens, wallets, transactions, and chain status. It provides comprehensive analysis capabilities without being overwhelming, though some consolidation might improve focus.

Completeness: 5/5

The toolset offers complete coverage for Solana intelligence, including token metadata, holder analysis, trader profiling, wallet history, transaction parsing, and chain status. There are no obvious gaps; agents can perform end-to-end investigations from basic token checks to deep wallet and transaction analysis.

Available Tools

19 tools
chain_status (A)

Current Solana chain status: current slot, block height, epoch number, slots remaining in epoch. Use to check if the network is healthy or to timestamp on-chain events. Light endpoint (no rate limit).

Parameters (JSON Schema)
No parameters
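For context, a tool like this is invoked over MCP with a standard tools/call request. A minimal sketch of that JSON-RPC 2.0 payload follows; the tool name comes from this listing, and the rest is generic MCP, not anything Noesis-specific:

```python
import json

# Generic MCP tools/call payload; chain_status takes no arguments,
# so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "chain_status", "arguments": {}},
}

print(json.dumps(request, indent=2))
```

Over Streamable HTTP, this payload is POSTed to the server's endpoint; per the description, the result carries the current slot, block height, epoch number, and slots remaining in the epoch.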

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a 'light endpoint (no rate limit),' indicating performance characteristics and usage constraints. It also implies read-only behavior by describing status retrieval, though this isn't explicitly stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: the first sentence states the purpose and key data points, the second provides usage guidelines, and the third adds behavioral context. Every sentence adds essential information with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no annotations, no output schema), the description is nearly complete. It covers purpose, usage, and behavioral traits. A minor gap is the lack of explicit mention of read-only nature, but this is implied. It adequately compensates for the absence of structured fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so the baseline is 4. The description appropriately doesn't discuss parameters, as none exist, and instead focuses on what the tool returns (slot, block height, etc.), which is valuable context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific details: it retrieves 'current Solana chain status' including 'current slot, block height, epoch number, slots remaining in epoch.' It distinguishes itself from sibling tools by focusing on network-wide status rather than token, wallet, or transaction-specific data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use to check if the network is healthy or to timestamp on-chain events.' It provides clear use cases (health checking, timestamping) without needing to reference alternatives, as no sibling tools offer similar functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cross_holders (A)

Find wallets holding multiple specified tokens simultaneously. Provide 2-10 token addresses to detect insider wallets, coordinated holders, or team wallets operating across related tokens. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
chain (optional, default "sol"): "sol" or "base"
tokens (required): List of token mint addresses (2-10)
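Because the endpoint is rate-limited, it is worth enforcing the documented constraints client-side before spending a request. A small sketch, with a helper name of our own invention rather than anything from the API:

```python
def build_cross_holders_args(tokens, chain="sol"):
    """Build the arguments object for a cross_holders call.

    Enforces the documented constraints locally: 2-10 token mint
    addresses, and chain must be "sol" (the default) or "base".
    """
    tokens = list(tokens)
    if not 2 <= len(tokens) <= 10:
        raise ValueError("cross_holders requires 2-10 token addresses")
    if chain not in ("sol", "base"):
        raise ValueError('chain must be "sol" or "base"')
    return {"chain": chain, "tokens": tokens}
```

Failing fast on an invalid token count costs nothing, while sending it to the server burns one of the 1 req/5sec slots.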
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates key behavioral traits: the tool identifies wallet intersections across tokens, specifies the token input range (2-10), and importantly discloses the rate limit ('Heavy endpoint (rate-limited to 1 req/5sec per API key)'). This provides valuable operational context beyond what the input schema alone would convey.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by usage context and operational constraints. Every sentence earns its place with no wasted words, making it highly efficient while remaining informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description does well by covering purpose, usage context, input constraints, and rate limiting. However, it doesn't describe what the output looks like (e.g., wallet addresses, holding amounts, timestamps) or any pagination/limitation behavior, which would be helpful given the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already fully documents both parameters. The description adds marginal value by reinforcing the token count constraint (2-10) mentioned in the schema, but doesn't provide additional semantic context about parameter usage beyond what's already in the structured fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Find wallets holding multiple specified tokens simultaneously') and resource ('wallets'), distinguishing it from siblings like token_holders (which likely finds holders of a single token) or wallet_profile (which focuses on individual wallet analysis). It explicitly mentions the multi-token intersection use case.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to detect insider wallets, coordinated holders, or team wallets operating across related tokens'), which helps differentiate it from siblings. However, it doesn't explicitly state when NOT to use it or name specific alternative tools for related but different purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cross_traders (A)

Find traders who were active across multiple specified tokens, with profit breakdown per token. Provide 2-10 token addresses to detect smart money patterns, repeat winners, or coordinated trading groups. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
chain (optional, default "sol"): "sol" or "base"
tokens (required): List of token mint addresses (2-10)
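Since every heavy endpoint here shares the 1 req/5sec limit, a client looping over several token sets needs to space its calls. One way to sketch that pacing client-side; the class is illustrative, not part of any SDK:

```python
import time

class HeavyEndpointThrottle:
    """Client-side pacing for heavy endpoints (documented 1 req/5sec)."""

    def __init__(self, min_interval=5.0):
        self.min_interval = min_interval
        self._last_call = None

    def wait(self):
        # Sleep just long enough to keep min_interval between calls.
        now = time.monotonic()
        if self._last_call is not None:
            remaining = self.min_interval - (now - self._last_call)
            if remaining > 0:
                time.sleep(remaining)
        self._last_call = time.monotonic()
```

Calling wait() before each cross_traders (or cross_holders) request keeps a batch of calls inside the documented limit.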
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully discloses important traits: the tool is 'heavy' (performance characteristic), has rate limits ('1 req/5sec per API key'), and returns 'profit breakdown per token' (output behavior). It doesn't mention error conditions or authentication requirements, but covers key operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences with zero waste. The first sentence states the core purpose and output, the second provides usage context and constraints. Every element serves a clear purpose, and critical information (like rate limits) is appropriately included.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description does well by explaining the tool's purpose, usage context, and key behavioral constraints. It could be more complete by describing the output format in more detail (beyond 'profit breakdown per token') or mentioning error scenarios, but it provides sufficient context for basic understanding given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds some context about the 'tokens' parameter ('2-10 token addresses to detect patterns') but doesn't provide additional meaning beyond what the schema already states about parameter constraints and purposes. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Find traders', 'detect patterns') and resources ('tokens', 'profit breakdown'), distinguishing it from siblings like token_best_traders (single token focus) or token_holders (no profit analysis). It explicitly mentions 'active across multiple specified tokens' which differentiates its cross-token analysis capability.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to detect smart money patterns, repeat winners, or coordinated trading groups') and specifies the required input range ('2-10 token addresses'). However, it doesn't explicitly mention when NOT to use it or name specific alternatives among the sibling tools for different scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_best_traders (A)

Find the most profitable traders for a token, ranked by realized PnL. Returns buy/sell amounts, profit, and Twitter handles where available. Use to identify smart money that traded this token successfully. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
chain (optional, default "sol"): "sol" or "base"
address (required): Solana or Base token mint address (base58 or 0x)
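Since address accepts either a base58 Solana mint or a 0x Base address, a caller can usually infer the chain argument from the address shape. A rough heuristic on the caller's side, not an API behavior:

```python
def infer_chain(address):
    """Guess the chain argument from the address format.

    A 0x-prefixed, 42-character string looks like an EVM (Base)
    address; anything else is treated as a base58 Solana mint.
    Purely a client-side convenience heuristic.
    """
    if address.startswith("0x") and len(address) == 42:
        return "base"
    return "sol"
```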
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the tool returns specific data (buy/sell amounts, profit, Twitter handles), identifies its purpose (finding profitable traders), and importantly discloses rate-limiting ('Heavy endpoint (rate-limited to 1 req/5sec per API key)'), which is crucial for operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with three concise sentences that each add value: the first states the core functionality, the second specifies the return data and purpose, and the third provides critical operational guidance (rate-limiting). There is no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is largely complete. It covers purpose, usage, return data, and rate-limiting. However, it lacks details on output format (e.g., structure of returned data) and potential limitations (e.g., data freshness or coverage), which could be helpful for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with clear descriptions for both parameters (chain and address). The description does not add any additional meaning beyond what the schema provides, such as explaining how the address is used to find traders or detailing chain-specific nuances, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Find', 'ranked by') and resources ('most profitable traders for a token', 'realized PnL'), distinguishing it from siblings like token_holders or token_top_holders by focusing on trader profitability rather than holdings or general information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to identify smart money that traded this token successfully'), but it does not explicitly mention when not to use it or name specific alternatives among the sibling tools, such as token_holders for general holder data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_bundles (A)

Analyze bundle and bot activity on a token: bundler percentage, sniper count, fresh wallet rate, and dev/team holdings. Helps detect coordinated launches and insider manipulation. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
chain (optional, default "sol"): "sol" or "base"
address (required): Solana or Base token mint address (base58 or 0x)
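With no output schema published, downstream handling is left to the caller. As a sketch, launch-risk screening over the metrics the description names (bundler percentage, fresh wallet rate) might look like the following; the field names and thresholds are hypothetical:

```python
def flag_bundle_risk(metrics, bundler_pct_max=30.0, fresh_rate_max=0.5):
    """Flag suspicious launches from token_bundles-style metrics.

    'bundler_pct' and 'fresh_wallet_rate' are hypothetical field
    names (the server publishes no output schema); the thresholds
    are likewise illustrative, not recommendations.
    """
    flags = []
    if metrics.get("bundler_pct", 0.0) > bundler_pct_max:
        flags.append("high bundler percentage")
    if metrics.get("fresh_wallet_rate", 0.0) > fresh_rate_max:
        flags.append("high fresh wallet rate")
    return flags
```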
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a 'Heavy endpoint' with specific rate limiting ('rate-limited to 1 req/5sec per API key'). This provides crucial operational context about performance constraints that isn't captured elsewhere.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states what it analyzes, second states its purpose, third discloses operational constraints. Every sentence adds value with zero wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description does well by explaining the analysis purpose and rate limiting. However, it doesn't describe the return format or what the analysis metrics actually mean, which would be helpful given the complexity of bundle/bot detection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters (chain and address). The description doesn't add any parameter-specific information beyond what the schema provides, maintaining the baseline score of 3 for good schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes token bundle and bot activity with specific metrics (bundler percentage, sniper count, fresh wallet rate, dev/team holdings) and its purpose to detect coordinated launches and insider manipulation. It uses specific verbs ('analyze', 'detect') and distinguishes from siblings by focusing on bundle/bot analysis rather than general token info or wallet tracking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Helps detect coordinated launches and insider manipulation'), which implicitly suggests it's for suspicious activity analysis rather than general token data. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the many sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_dev_profile (A)

Profile the creator/developer of a token: their wallet PnL, all other tokens they created (with outcomes), and their funding source. Essential for checking if a dev is a serial rugger. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
chain (optional, default "sol"): "sol" or "base"
address (required): Solana or Base token mint address (base58 or 0x)
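The serial-rugger check amounts to aggregating the outcomes of a dev's previous tokens. A sketch of that aggregation; the outcome list and the "rugged" label are hypothetical, since the description only says other tokens are returned "with outcomes":

```python
def serial_rugger_ratio(outcomes):
    """Fraction of a dev's prior tokens that ended in a rug.

    'outcomes' is a hypothetical list of outcome strings derived
    from a token_dev_profile response; no output schema is published,
    so the "rugged" label is an assumption for illustration.
    """
    if not outcomes:
        return 0.0
    return sum(1 for o in outcomes if o == "rugged") / len(outcomes)
```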
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing critical behavioral traits: it's a 'Heavy endpoint' with specific rate limiting ('rate-limited to 1 req/5sec per API key'). However, it doesn't mention authentication requirements, error conditions, or response format details that would be helpful for a tool with no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: purpose statement, usage context, and operational constraints. Each sentence earns its place by providing essential information without redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description does well by explaining the tool's investigative purpose and rate limiting. However, it could better address the lack of output schema by hinting at the response structure or data format, given the complexity implied by profiling multiple aspects of a developer.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters. The description adds no additional parameter information beyond what's in the schema, so it meets the baseline expectation but doesn't provide extra value regarding parameter usage or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Profile the creator/developer of a token') and the comprehensive scope of information returned (wallet PnL, other tokens created with outcomes, funding source). It explicitly distinguishes this from sibling tools by focusing on developer profiling rather than token data, holder analysis, or wallet history.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Essential for checking if a dev is a serial rugger'), establishing a clear investigative context. It also implicitly distinguishes from siblings by focusing on developer risk assessment rather than token metrics or wallet analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_early_buyers (A)

Find wallets that bought a token within the first N hours of creation (default 1 hour). Returns SOL spent, token amount received, and transaction signature for each buyer. Useful for identifying insiders or coordinated early accumulation. Solana only. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
chain (optional, default "sol"): "sol" or "base"
hours (optional, default 1.0): Time window in hours from token creation (0.1-24.0)
address (required): Token mint address
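The hours window is the only numeric knob; validating the documented 0.1-24.0 range locally avoids wasting a rate-limited call. A sketch, with a helper name of our own:

```python
def early_buyers_args(address, hours=1.0):
    """Build arguments for a token_early_buyers call (Solana only).

    The schema documents hours as 0.1-24.0 with a default of 1.0,
    so out-of-range values are rejected before the request is sent.
    """
    if not 0.1 <= hours <= 24.0:
        raise ValueError("hours must be between 0.1 and 24.0")
    return {"chain": "sol", "hours": hours, "address": address}
```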
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively discloses key traits: the heavy endpoint nature, rate limits ('rate-limited to 1 req/5sec per API key'), and the Solana-only restriction. However, it does not mention potential errors, pagination, or response format details, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with every sentence earning its place: it states the purpose, return values, use case, limitations, and rate limits in four concise sentences. There is no redundant or verbose language, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (3 parameters, no output schema, no annotations), the description is mostly complete: it covers purpose, usage, behavioral traits, and limitations. However, it lacks details on the output structure (e.g., format of returned data) and does not fully address all potential edge cases, leaving minor gaps in context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents all parameters (chain, hours, address). The description adds some context by mentioning the default hour value (1 hour) and implying the address is for token creation, but it does not provide significant additional meaning beyond what the schema specifies, such as format details or usage examples for the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Find wallets that bought a token') and resources ('within the first N hours of creation'), distinguishing it from siblings like token_holders or token_top_holders by focusing on early buyers rather than current holders. It explicitly mentions the Solana-only limitation and the specific data returned (SOL spent, token amount, transaction signature).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Useful for identifying insiders or coordinated early accumulation') and mentions the Solana-only restriction, but it does not explicitly state when not to use it or name specific alternative tools among the siblings. The guidance is helpful but lacks explicit comparisons to alternatives like token_fresh_wallets or token_entry_price.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_entry_price (A)

Entry prices for current token holders: what price each holder bought at vs current price, showing unrealized PnL. Use to gauge holder conviction and potential sell pressure. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
chain (optional, default "sol"): "sol" or "base"
address (required): Solana or Base token mint address (base58 or 0x)
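The unrealized PnL the description mentions is just entry versus current price scaled by balance. A minimal sketch of the arithmetic; the field names on any real response are unknown, since no output schema is published:

```python
def unrealized_pnl(entry_price, current_price, balance):
    """Unrealized PnL for one holder: (current - entry) * balance."""
    return (current_price - entry_price) * balance

# Example: a holder who bought 1,000 tokens at $0.02, with the token
# now at $0.05, sits on roughly $30 of unrealized profit.
```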
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a 'heavy endpoint' with specific rate limiting ('rate-limited to 1 req/5sec per API key'). It doesn't mention authentication requirements or data freshness, but covers the most critical operational constraint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded: three sentences with zero waste. The first sentence explains the core functionality, the second provides usage context, and the third discloses critical behavioral constraints. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 parameters, 100% schema coverage, but no annotations or output schema, the description does well. It explains the tool's purpose, usage context, and critical rate limiting. It could mention response format or pagination given no output schema, but covers the essentials for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate when the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs and resources: 'Entry prices for current token holders: what price each holder bought at vs current price, showing unrealized PnL.' It distinguishes from siblings by focusing on entry price analysis rather than general holder information (like token_holders) or trading patterns (like token_best_traders).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'Use to gauge holder conviction and potential sell pressure.' However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, though the purpose differentiation implies alternatives exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_fresh_wallets: A

Detect fresh/new wallets holding a token with age category (hours/days/weeks old), token balance, percentage of supply, and funding source. Fresh wallets often indicate sybil attacks or coordinated buying. Solana only. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
Name     Required  Description                                        Default
chain    No        Chain: "sol" (default) or "base"                   sol
address  Yes       Solana or Base token mint address (base58 or 0x)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing important behavioral traits: 'Heavy endpoint (rate-limited to 1 req/5sec per API key)' and 'Solana only.' It also explains the significance of results ('often indicate sybil attacks or coordinated buying'), though it doesn't detail response format or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states purpose and output details, second provides usage context, third gives technical constraints. Every sentence adds value with zero wasted words, and critical information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description does well by covering purpose, usage context, behavioral constraints, and technical limitations. However, it doesn't describe the response format or structure, which would be helpful given the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema descriptions, but it does provide important context about the tool's focus on Solana tokens, which relates to the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Detect fresh/new wallets holding a token' with specific details about what information is provided (age category, balance, percentage of supply, funding source). It distinguishes itself from sibling tools like token_holders or token_top_holders by focusing specifically on 'fresh' wallets rather than general holder analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when this tool is relevant: 'Fresh wallets often indicate sybil attacks or coordinated buying' and specifies 'Solana only.' However, it doesn't explicitly mention when NOT to use it or name specific alternative tools for different use cases, though the sibling list suggests alternatives like token_holders for general analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_holders: A

Paginated list of token holders with token account address, owner wallet, raw token amount, and escrow flag. Use for raw holder data; for enriched holder analysis use token_top_holders instead. Solana only. Light endpoint (no rate limit).

Parameters (JSON Schema)
Name     Required  Description                                                     Default
chain    No        Chain: "sol" (default) or "base"                                sol
limit    No        Number of holders to return (1-1000, default 100)
cursor   No        Pagination cursor from previous response (omit for first page)
address  Yes       Token mint address (base58 for Solana)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing pagination behavior ('Paginated list'), platform specificity ('Solana only'), performance characteristics ('Light endpoint (no rate limit)'), and data format ('raw holder data'). It doesn't mention authentication requirements or error conditions, but covers most essential behavioral aspects.
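The cursor pagination described above follows a standard loop: pass each response's cursor into the next request until none is returned. A sketch with a hypothetical `call_tool` client function; the response field names `holders` and `cursor` are assumptions, since no output schema is published:

```python
def fetch_all_holders(call_tool, address, limit=1000):
    """Collect every page from the cursor-paginated token_holders endpoint.

    `call_tool` is a hypothetical client function returning a dict; the
    "holders" and "cursor" field names are assumptions, not documented here.
    """
    holders, cursor = [], None
    while True:
        args = {"address": address, "limit": limit}
        if cursor is not None:
            args["cursor"] = cursor  # omitted on the first page
        page = call_tool("token_holders", args)
        holders.extend(page.get("holders", []))
        cursor = page.get("cursor")
        if not cursor:  # last page carries no cursor
            return holders

# Stubbed client for demonstration: two fake pages of holders.
def fake_call_tool(name, args):
    if "cursor" not in args:
        return {"holders": ["acct1", "acct2"], "cursor": "page2"}
    return {"holders": ["acct3"]}

all_holders = fetch_all_holders(fake_call_tool, "SomeMintAddress")
```

Because this is a light endpoint with no rate limit, the loop needs no delay between pages.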

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with zero waste - three concise sentences that each earn their place: first states purpose and data fields, second provides usage differentiation, third specifies platform and performance characteristics. It's front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only list tool with 100% schema coverage but no output schema, the description provides excellent context about pagination, data format, platform constraints, performance, and sibling differentiation. The main gap is lack of output format details, but given this is a straightforward list operation, the description is nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all 4 parameters. The description adds minimal parameter-specific context beyond what's in the schema - it mentions 'Solana only' which relates to the chain parameter, but doesn't provide additional semantic meaning for individual parameters. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Paginated list of token holders') and resource ('token holders'), listing the exact data fields returned. It explicitly distinguishes from sibling 'token_top_holders' by stating this is for 'raw holder data' versus 'enriched holder analysis'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('for raw holder data') versus when to use an alternative ('for enriched holder analysis use token_top_holders instead'). It also specifies platform constraints ('Solana only') and performance characteristics ('Light endpoint').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_info: A

On-chain token metadata from Metaplex DAS: name, symbol, description, image, decimals, mint/freeze/update authorities, total supply, and current price. Use for basic token identification. Solana only. Light endpoint (no rate limit).

Parameters (JSON Schema)
Name     Required  Description                                        Default
chain    No        Chain: "sol" (default) or "base"                   sol
address  Yes       Solana or Base token mint address (base58 or 0x)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates key behavioral traits: 'Light endpoint (no rate limit)' informs about performance characteristics, and 'On-chain token metadata from Metaplex DAS' clarifies the data source. It doesn't mention authentication requirements or potential limitations beyond rate limits, but provides useful operational context.
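Since token_info takes only `chain` and `address`, a call is a plain MCP `tools/call` envelope. A hedged sketch: the JSON-RPC shape follows the MCP specification, and the wrapped-SOL mint address is used purely as an example input:

```python
import json

# Hypothetical JSON-RPC request body for an MCP tools/call invocation of
# token_info; this page confirms only that the transport is Streamable HTTP.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "token_info",
        "arguments": {
            # "chain" is optional and defaults to "sol"
            "address": "So11111111111111111111111111111111111111112",  # wrapped SOL
        },
    },
}
body = json.dumps(request)
```

The same envelope applies to every tool in this catalog; only `name` and `arguments` change.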

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded and concise. Every sentence earns its place: the first sentence defines purpose and scope, the second provides usage guidance, and the third adds operational context. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is reasonably complete. It covers purpose, scope, usage context, and operational characteristics. However, without an output schema, it could benefit from mentioning what format the metadata returns in, though the listed fields provide some indication.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (like explaining address formats or chain options). This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('On-chain token metadata from Metaplex DAS') and resources ('name, symbol, description, image, decimals, mint/freeze/update authorities, total supply, and current price'). It distinguishes from siblings by specifying 'basic token identification' and 'Solana only', differentiating from tools like token_holders or token_top_holders which focus on holder data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Use for basic token identification') and specifies 'Solana only', which implicitly excludes alternative chains. However, it doesn't explicitly state when not to use it or name specific alternatives among the sibling tools for more advanced token analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_preview: A

Quick token overview returning current price, market cap, liquidity, 24h volume, holder count, and social links. Use this for a fast first look at any token before deeper analysis. Light endpoint (no rate limit).

Parameters (JSON Schema)
Name     Required  Description                                        Default
chain    No        Chain: "sol" (default) or "base"                   sol
address  Yes       Solana or Base token mint address (base58 or 0x)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and adds valuable behavioral context: it discloses that this is a 'light endpoint (no rate limit)', which is crucial for usage planning, and implies it returns a summary rather than exhaustive data. However, it does not detail error conditions or response formats.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded: the first sentence defines the purpose and outputs, the second provides usage guidance, and the third adds behavioral context. Every sentence earns its place without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is mostly complete: it covers purpose, usage, and key behavioral traits. However, it lacks details on output structure or error handling, which would be helpful for an agent to interpret results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting both parameters (chain and address). The description does not add any parameter-specific semantics beyond what the schema provides, such as examples or constraints, so it meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('returning') and resources ('current price, market cap, liquidity, 24h volume, holder count, and social links'), distinguishing it from siblings like token_info or token_scan by emphasizing it's a 'quick overview' and 'fast first look' rather than deep analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('for a fast first look at any token before deeper analysis'), but it does not explicitly state when not to use it or name specific alternatives among the sibling tools, such as token_info for more detailed data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_scan: A

Comprehensive token scan combining top traders, security flags (mint/freeze authority, honeypot risk), holder quality metrics, DEX pair data, and bundle/bot statistics in a single call. Returns a large JSON payload. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
Name     Required  Description                                        Default
chain    No        Chain: "sol" (default) or "base"                   sol
address  Yes       Solana or Base token mint address (base58 or 0x)

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does an excellent job disclosing key behavioral traits: it returns 'a large JSON payload', is a 'heavy endpoint', and has specific rate limiting ('rate-limited to 1 req/5sec per API key'). These are crucial operational details not inferable from the input schema alone.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured: first sentence establishes purpose and scope, second describes output characteristics, third provides critical operational constraints. Every sentence earns its place with zero wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description does an excellent job covering behavioral aspects (rate limiting, payload size) and purpose. The main gap is lack of output format details beyond 'large JSON payload', but given the comprehensive nature described, this is a minor omission.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing good documentation for both parameters. The description doesn't add any parameter-specific information beyond what's in the schema, but doesn't need to since schema coverage is comprehensive. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a 'comprehensive token scan' that combines multiple specific analyses (top traders, security flags, holder quality metrics, DEX pair data, bundle/bot statistics) in a single call. It distinguishes itself from sibling tools like token_best_traders, token_holders, or token_info by emphasizing this comprehensive multi-faceted approach rather than focusing on individual aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool: when needing a comprehensive analysis combining multiple data points in one call. It doesn't explicitly state when NOT to use it or name specific alternatives, but the comprehensive nature implies it's for broad analysis rather than targeted queries that sibling tools might handle better.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_team_supply: A

Identify team/insider wallets holding a token: categorized as Fresh, Inactive, or Linked wallets with balance, supply percentage, and funding source. Detects coordinated supply control. Solana only. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
Name     Required  Description                                        Default
chain    No        Chain: "sol" (default) or "base"                   sol
address  Yes       Solana or Base token mint address (base58 or 0x)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and effectively discloses key behavioral traits: it specifies the blockchain limitation ('Solana only'), rate limits ('rate-limited to 1 req/5sec per API key'), and the heavy nature of the endpoint, which are crucial for an agent to use it correctly without annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with every sentence earning its place: the first sentence states the core purpose, the second adds critical behavioral details (blockchain and rate limits), and there is no redundant or unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (analyzing wallets with categorization and detection), no annotations, and no output schema, the description is fairly complete by covering purpose, constraints, and behavioral traits. However, it could benefit from more details on output format or error handling to fully compensate for the lack of structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents both parameters (chain and address) adequately. The description does not add any specific meaning beyond the schema, such as explaining the address format or chain implications, but it implies the address is for a token mint, which aligns with the schema's description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('identify', 'categorized', 'detects') and resources ('team/insider wallets holding a token'), distinguishing it from siblings like token_holders or token_top_holders by focusing on categorization and coordinated supply detection rather than just listing holders.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Solana only', 'Heavy endpoint') and implies usage for analyzing token supply control, but does not explicitly state when not to use it or name specific alternatives among siblings like token_holders or token_top_holders for simpler needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_top_holders: A

Top holders of a token enriched with realized PnL, profit stats, winrate, and wallet funding source. Use to assess holder quality and detect smart money positions. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)
Name     Required  Description                                        Default
chain    No        Chain: "sol" (default) or "base"                   sol
address  Yes       Solana or Base token mint address (base58 or 0x)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively discloses key traits: it's a 'Heavy endpoint' with rate limits ('rate-limited to 1 req/5sec per API key'), which is crucial for usage planning. However, it doesn't specify other behavioral aspects like error handling, response format, or data freshness, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by usage context and rate limit warning. Every sentence earns its place with no wasted words, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (enriched data, rate limits) and lack of annotations and output schema, the description does a good job covering key aspects like purpose, usage, and behavioral constraints. However, it could be more complete by mentioning response structure or data limitations, as the output schema is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('chain' and 'address') with descriptions. The description does not add any meaning beyond the schema, such as explaining parameter interactions or providing examples. Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Top holders of a token enriched with...') and distinguishes it from sibling tools like 'token_holders' by emphasizing enrichment features (realized PnL, profit stats, winrate, wallet funding source) and analytical purposes (assess holder quality, detect smart money positions). It uses precise verbs and resources without tautology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('assess holder quality and detect smart money positions'), but it does not explicitly state when not to use it or name alternatives among sibling tools. For example, it doesn't compare to 'token_holders' or other related tools, though the enrichment features imply a more advanced use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transactions_parse: A

Parse Solana transaction signatures into decoded, human-readable data: token transfers, SOL movements, program interactions, and instruction details. Provide up to 100 transaction signatures. Solana only. Light endpoint (no rate limit).

Parameters (JSON Schema)
Name          Required  Description
transactions  Yes       List of transaction signatures to parse (max 100). Returns decoded instruction data, token transfers, and program interactions.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and adds valuable behavioral context: it specifies 'Light endpoint (no rate limit)', which informs about performance and constraints, and mentions the max 100 transactions limit. However, it lacks details on error handling, response format, or authentication needs.
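The max-100 limit noted above means larger signature sets must be chunked client-side before calling transactions_parse. A minimal sketch; the signature strings are placeholders:

```python
MAX_BATCH = 100  # transactions_parse accepts at most 100 signatures per call

def chunked(signatures, size=MAX_BATCH):
    """Split a signature list into batches the endpoint will accept."""
    return [signatures[i:i + size] for i in range(0, len(signatures), size)]

sigs = [f"sig{i}" for i in range(250)]  # placeholder signatures
batches = chunked(sigs)
# 250 signatures -> three batches of 100, 100, and 50
```

Each batch would then be passed as the `transactions` argument of a separate call; since this is a light endpoint, no delay is needed between batches.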

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first clause, followed by specific details and constraints in a single, efficient sentence. Every phrase adds value without redundancy, making it appropriately sized and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 parameter, no output schema, no annotations), the description is mostly complete: it covers purpose, constraints (max 100, Solana only), and behavioral traits (light endpoint). However, without an output schema, it could benefit from hinting at the return format, though the decoding details are implied.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'transactions' parameter fully. The description adds marginal value by reiterating 'Provide up to 100 transaction signatures' and the parsing outcome, but does not provide additional syntax or format details beyond what the schema states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb 'parse' and resource 'Solana transaction signatures', with detailed scope 'into decoded, human-readable data: token transfers, SOL movements, program interactions, and instruction details'. It distinguishes from siblings by specifying 'Solana only' and 'Light endpoint', unlike other tools focused on tokens, wallets, or chains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for parsing Solana transactions (specifically 'Solana only') with up to 100 signatures. It implies usage for decoding rather than other operations like checking status or analyzing wallets, but does not explicitly state when not to use it or name alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wallet_connections (A)

Map SOL transfer connections for a wallet: who sent/received SOL, transfer amounts, and frequency. Reveals funding networks and wallet clusters. Solana only. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)

Name    | Required | Description                                    | Default
address | Yes      | Wallet address                                 |
min_sol | No       | Minimum SOL transfer to include (default 0.1)  |
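As a usage sketch, the two parameters above map onto a small arguments dict. The helper name is hypothetical; the field names and the 0.1 default come from the parameter table:

```python
def wallet_connections_args(address, min_sol=0.1):
    """Arguments for a wallet_connections call.

    min_sol defaults to 0.1 SOL per the parameter table; raising it
    filters out dust transfers before the rate-limited call is made.
    """
    if not address:
        raise ValueError("address is required")
    if min_sol < 0:
        raise ValueError("min_sol must be non-negative")
    return {"address": address, "min_sol": min_sol}
```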
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively discloses key traits: it's a read operation (implied by 'Map'), specifies platform limitation ('Solana only'), and warns about performance constraints ('Heavy endpoint' with rate limits). This covers critical aspects like scope, platform specificity, and rate limiting, though it could add more on error handling or response format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
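The '1 req/5sec per API key' limit on heavy endpoints can be respected with simple client-side pacing. A minimal sketch; the `Throttle` class is our own, and only the 5-second interval comes from the listing:

```python
import time

class Throttle:
    """Pace calls to the heavy endpoints, which the listing says
    are limited to 1 request per 5 seconds per API key."""

    def __init__(self, min_interval=5.0):
        self.min_interval = min_interval
        self._last = 0.0  # monotonic timestamp of the previous call

    def wait(self):
        # Sleep just long enough to keep calls min_interval apart.
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()
```

Calling `throttle.wait()` before each heavy-endpoint request keeps the client under the stated limit without tracking server-side state.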

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured, using two sentences that front-load the core purpose and efficiently cover scope, outputs, platform, and rate limits. Every sentence earns its place with no wasted words, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (analysis tool with no output schema and no annotations), the description is fairly complete: it explains what the tool does, its scope, outputs, platform, and rate limits. However, it lacks details on return values (e.g., format of mapped connections) and could benefit from more on error cases or prerequisites, slightly reducing completeness for a tool of this nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('address' and 'min_sol'). The description adds some context by mentioning 'transfer amounts' and 'frequency', which relate to the parameters, but does not provide additional syntax, format details, or examples beyond what the schema offers. The baseline score of 3 reflects adequate but minimal added value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Map SOL transfer connections') and resources ('for a wallet'), detailing what it reveals ('who sent/received SOL, transfer amounts, and frequency', 'funding networks and wallet clusters'). It distinguishes from siblings by specifying 'Solana only' and focusing on connection mapping rather than general wallet history or profile analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Map SOL transfer connections for a wallet') and includes a technical constraint ('Heavy endpoint (rate-limited to 1 req/5sec per API key)') that informs usage decisions. However, it does not explicitly state when not to use it or name specific alternatives among sibling tools, though the context implies it's for connection analysis rather than other wallet functions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wallet_history (A)

Wallet transaction history from Helius Enhanced Transactions API. Returns decoded transactions with token transfers, program interactions, and human-readable descriptions. Supports filtering by transaction type and source protocol. Solana only. Light endpoint (no rate limit).

Parameters (JSON Schema)

Name    | Required | Description                                                                             | Default
limit   | No       | Number of transactions to return (1-100, default 20)                                   |
before  | No       | Pagination: transaction signature to start before (omit for most recent)               |
source  | No       | Filter by source protocol (e.g. "RAYDIUM", "JUPITER"). Omit for all sources.           |
address | Yes      | Wallet address (base58 for Solana)                                                     |
tx_type | No       | Filter by transaction type (e.g. "SWAP", "TRANSFER", "NFT_SALE"). Omit for all types.  |
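A sketch of assembling arguments for this tool, assuming optional filters should be omitted entirely rather than sent as null. The helper name is ours; the field names, the 1-100 range, and the default of 20 come from the parameter table:

```python
def wallet_history_args(address, limit=20, before=None, source=None, tx_type=None):
    """Arguments for a wallet_history call (names from the parameter table)."""
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    args = {"address": address, "limit": limit}
    # Per the table, omitted filters mean "all sources" / "all types".
    if before is not None:
        args["before"] = before
    if source is not None:
        args["source"] = source
    if tx_type is not None:
        args["tx_type"] = tx_type
    return args
```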
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and adds valuable behavioral context beyond the input schema. It discloses that the tool is a 'light endpoint (no rate limit)', which is crucial for usage planning, and describes the return content ('decoded transactions with token transfers, program interactions, and human-readable descriptions'), giving insight into output behavior not covered by the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with every sentence earning its place. It starts with the core purpose, adds details on returns and filtering, specifies constraints ('Solana only'), and ends with operational notes ('light endpoint'), all in a compact format with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, no output schema, no annotations), the description is largely complete. It covers purpose, return content, filtering capabilities, constraints, and rate limit behavior. However, it could improve by mentioning pagination details (implied by 'before' parameter) or error handling, slightly reducing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, providing detailed parameter documentation. The description adds minimal semantic value beyond the schema, mentioning filtering by 'transaction type and source protocol' which is already covered in the schema for tx_type and source. Baseline score of 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('returns decoded transactions') and resources ('wallet transaction history'), distinguishing it from sibling tools like wallet_profile or wallet_connections by focusing on transaction history rather than profile or connection data. It specifies the data source ('Helius Enhanced Transactions API') and scope ('Solana only').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('wallet transaction history') and implicitly distinguishes it from siblings by not overlapping with their functions (e.g., token_info, wallet_profile). However, it lacks explicit guidance on when not to use it or named alternatives for similar queries, such as comparing to transactions_parse for parsing individual transactions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wallet_profile (A)

Complete wallet profile: SOL balance, realized PnL, 7-day and 30-day profit stats, winrate, wallet funding source, and known labels/tags. Use to assess whether a wallet is smart money, a bot, or a fresh sybil. Heavy endpoint (rate-limited to 1 req/5sec per API key).

Parameters (JSON Schema)

Name    | Required | Description                                       | Default
chain   | No       | Chain: "sol" (default) or "base"                  | sol
address | Yes      | Wallet address (base58 for Solana, 0x for Base)   |
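A sketch of argument validation for this tool. The helper name is ours; the allowed chain values and the address-format note come from the parameter table:

```python
def wallet_profile_args(address, chain="sol"):
    """Arguments for a wallet_profile call.

    The table documents two chains: "sol" (the default) and "base".
    Base addresses are 0x-prefixed; Solana addresses are base58.
    """
    if chain not in ("sol", "base"):
        raise ValueError('chain must be "sol" or "base"')
    if chain == "base" and not address.startswith("0x"):
        raise ValueError("Base addresses must be 0x-prefixed")
    return {"chain": chain, "address": address}
```

Validating the chain/address pairing locally catches a mismatched combination before it counts against the 1 req/5sec limit.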
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates that this is a 'heavy endpoint' with specific rate limiting ('rate-limited to 1 req/5sec per API key'), which is crucial operational context not available elsewhere. It doesn't mention error conditions or response format, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first lists all data returned, the second provides usage context and rate limiting. Every element serves a purpose with zero wasted words, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description does well by specifying the comprehensive data returned and important behavioral constraints. However, it doesn't describe the response format or structure, which would be helpful given the absence of an output schema. The rate limiting disclosure partially compensates for missing annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score of 3. No compensation is needed since schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves comprehensive wallet profile data including SOL balance, PnL, profit stats, winrate, funding source, and labels/tags. It specifies the exact data points returned and distinguishes this from sibling tools like wallet_history or wallet_connections by focusing on profile assessment rather than transaction history or network connections.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'to assess whether a wallet is smart money, a bot, or a fresh sybil.' This provides clear context for application and distinguishes it from sibling tools that serve different purposes like token_info or token_holders.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
