Noesis
Server Details
Solana on-chain intelligence for AI agents. 20 tools for token analysis, wallet profiling, bundle & sniper detection, fresh-wallet clustering, dev profiling, cross-token patterns, and live PumpFun/Raydium/Meteora event streams.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 20 of 20 tools scored. Lowest: 2.5/5.
Most tools have distinct purposes, but some overlap exists: token_holders and token_top_holders both provide holder data, with the latter being enriched, which could cause confusion. Similarly, token_info and token_preview both offer token overviews, though preview is more market-focused. Descriptions help differentiate, but careful reading is needed to avoid misselection.
Tools follow a consistent snake_case pattern with clear verb_noun structures (e.g., chain_fees, token_info, wallet_profile). Minor deviations include cross_holders and cross_traders using 'cross' as a prefix instead of a verb, but overall naming is predictable and readable across the set.
With 20 tools, the count is slightly high but reasonable for a comprehensive Solana analytics server covering tokens, wallets, and chain data. It avoids being overwhelming by grouping related tools (e.g., token_* for token analysis), though some consolidation might improve focus without harming functionality.
The toolset provides extensive coverage for Solana analytics, including chain status, token metadata, holder/trader analysis, wallet profiling, and transaction parsing. It supports full workflows from basic token checks to deep investigations, with no obvious gaps for its domain, ensuring agents can perform complex analyses without dead ends.
Available Tools
20 tools

chain_fees (A)
Estimate current Solana priority fees at different percentile levels (min, low, medium, high, very high, unsafe max) in microLamports. Optionally scope to specific accounts (e.g. a program ID) for more accurate estimates. Use when building transactions that need priority fee guidance. Light endpoint (no rate limit).
| Name | Required | Description | Default |
|---|---|---|---|
| accounts | No | Comma-separated Solana account pubkeys to scope the fee estimate (e.g. a program ID or token mint). Omit for global estimate. | |
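As a concrete illustration, here is a minimal sketch of an MCP `tools/call` request for `chain_fees`, built in Python. The JSON-RPC envelope follows the MCP specification; the `accounts` value (the SPL Token program ID) is used purely as an example of scoping, and a real client library would handle the transport.

```python
import json

# Sketch of an MCP tools/call request for chain_fees.
# The accounts value is the SPL Token program ID, shown only as an
# example of scoping the estimate; omit it for a global estimate.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "chain_fees",
        "arguments": {
            "accounts": "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA",
        },
    },
}
print(json.dumps(request, indent=2))
```

The response carries fee estimates per percentile level in microLamports, per the description above.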
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses key behavioral traits: it's a read-only estimation tool ('Estimate'), mentions it's a 'Light endpoint (no rate limit)', and clarifies the output format ('in microLamports'). However, it doesn't detail error conditions or response structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by optional features and usage context, all in four efficient sentences with zero wasted words. Each sentence adds distinct value: what it does, parameter semantics, when to use it, and behavioral notes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (fee estimation with one optional parameter), no annotations, and no output schema, the description is largely complete: it covers purpose, usage, parameters, and some behavior. However, it lacks details on output format (beyond 'microLamports') and potential errors, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the optional parameter's purpose ('Optionally scope to specific accounts... for more accurate estimates') and providing an example ('e.g., a program ID'), enhancing understanding beyond the schema's technical definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Estimate current Solana priority fees') and resource ('at different percentile levels'), with precise scope ('in microLamports'). It distinguishes from siblings by focusing on fee estimation rather than status, holders, traders, tokens, or wallet operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use the tool ('Use when building transactions that need priority fee guidance') and provides an optional parameter for scoping to specific accounts for more accurate estimates, offering clear context for application.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
chain_status (B)
Current Solana chain status: current slot, block height, epoch number, slots remaining in epoch. Use to check if the network is healthy or to timestamp on-chain events. Light endpoint (no rate limit).
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states it's a 'Light endpoint (no rate limit),' which settles the rate-limit question, but doesn't cover other critical aspects like its read-only nature, error handling, or response format. For a tool with zero annotation coverage, this still leaves gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: three short sentences that efficiently convey the core purpose, the key data points, usage context, and the 'Light endpoint' note. Every word earns its place, with no wasted information or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers what data is retrieved but lacks details on behavioral traits (e.g., safety, performance) and output format. For a read-only status tool, this is minimally viable but leaves room for improvement in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add param info, which is appropriate here. Baseline is 4 for zero parameters, as the schema fully handles the lack of inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving 'Current Solana chain status' with specific data points (slot, block height, epoch info) and notes it's a 'Light endpoint.' This is a specific verb+resource combination, though it doesn't explicitly differentiate itself from chain_fees, the other chain-level tool in the set.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers brief when-to-use guidance ('Use to check if the network is healthy or to timestamp on-chain events') but no comparisons or exclusions. It mentions it's a 'Light endpoint,' which implies efficiency, yet it never says when an alternative tool would be the better choice, so the agent still lacks direction for selection between tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cross_holders (A)
Find wallets holding multiple specified tokens simultaneously. Provide 2-10 token addresses to detect insider wallets, coordinated holders, or team wallets operating across related tokens. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| tokens | Yes | List of token mint addresses (2-10) | |
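To show the parameter shape, here is a hedged sketch of a `cross_holders` call. The mint addresses are hypothetical placeholders (supply 2-10 real mints); the assertion mirrors the 2-10 constraint from the table above.

```python
import json

# Sketch of a cross_holders call. The mint addresses are hypothetical
# placeholders; the tool requires between 2 and 10 real mint addresses.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "cross_holders",
        "arguments": {
            "chain": "sol",  # default; "base" is also accepted
            "tokens": [
                "ExampleMintA1111111111111111111111111111111",
                "ExampleMintB2222222222222222222222222222222",
            ],
        },
    },
}
assert 2 <= len(request["params"]["arguments"]["tokens"]) <= 10
print(json.dumps(request, indent=2))
```

Because this is a heavy endpoint (1 req/5sec per API key), batch all the mints of interest into one call rather than issuing many.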
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and adds valuable behavioral context: it discloses the tool's purpose (detecting insider/coordinated wallets), its performance profile ('Heavy endpoint' with an explicit rate limit of 1 req/5sec per API key), and input constraints ('2-10' tokens). It doesn't cover output format or error conditions, but provides meaningful operational insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with zero waste: first states core functionality, second adds use case context, third provides critical performance warning. Every sentence earns its place and information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides adequate purpose and behavioral context but lacks details about return values, error handling, or specific performance implications of 'Heavy endpoint'. It covers the basics but leaves gaps an agent would need to handle.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema (e.g., doesn't explain token format details or chain implications), meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Find wallets', 'Detects') and resources ('multiple specified tokens', 'insider wallets or coordinated holders'), distinguishing it from siblings like token_top_holders (single token focus) or wallet_profile (individual wallet analysis).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for detecting insider/coordinated activity and mentions it's a 'Heavy endpoint', providing some context. However, it lacks explicit guidance on when to use this versus alternatives like cross_traders or token_bundles, and doesn't specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cross_traders (C)
Find traders who were active across multiple specified tokens, with profit breakdown per token. Provide 2-10 token addresses to detect smart money patterns, repeat winners, or coordinated trading groups. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| tokens | Yes | List of token mint addresses (2-10) | |
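Since cross_traders, cross_holders, and the other heavy endpoints share a 1 req/5sec limit per API key, a client that fans calls out over several token sets needs pacing. Below is a minimal sketch; `call_tool` is a hypothetical callable standing in for whatever MCP client method actually issues the request.

```python
import time

RATE_LIMIT_SECONDS = 5.0  # heavy endpoints: 1 request per 5 seconds per key

def paced_calls(requests, call_tool, now=time.monotonic, sleep=time.sleep):
    """Issue heavy-endpoint tool calls sequentially with rate-limit spacing.

    call_tool is a hypothetical (name, arguments) -> result callable;
    substitute any real MCP client method with that shape. now/sleep are
    injectable for testing.
    """
    results = []
    last = None
    for name, arguments in requests:
        if last is not None:
            # Wait out whatever remains of the 5-second window.
            wait = RATE_LIMIT_SECONDS - (now() - last)
            if wait > 0:
                sleep(wait)
        last = now()
        results.append(call_tool(name, arguments))
    return results
```

This keeps successive heavy calls at least five seconds apart without penalizing the first call.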
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the 'Heavy endpoint' rate limit explicitly (1 req/5sec per API key), but doesn't cover other critical aspects like required permissions, data freshness, error handling, or output format. For a tool with no annotation coverage, those remaining omissions leave key behavioral traits unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, with three short sentences that efficiently convey purpose, context, and a warning. Each sentence adds value: the first states the core function, the second adds analytical context, and the third provides a performance note. There's no wasted text, making it appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (analyzing traders across tokens with profit metrics), no annotations, and no output schema, the description is incomplete. It lacks details on output format, error cases, prerequisites, or how 'profit per token' is calculated. While concise, it doesn't provide enough context for an agent to fully understand the tool's behavior and limitations, especially for a data analysis tool with no structured output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description doesn't add meaning beyond the input schema, which has 100% coverage with clear descriptions for both parameters ('chain' and 'tokens'). The schema already explains 'chain' as 'sol' or 'base' and 'tokens' as a list of mint addresses (2-10). Since schema coverage is high, the baseline score is 3, as the description doesn't compensate with additional param details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find traders who were active across multiple specified tokens, with profit breakdown per token.' It specifies the verb ('find'), resource ('traders'), and scope ('across multiple specified tokens'), which is specific and actionable. However, it doesn't explicitly differentiate from sibling tools like 'cross_holders' or 'token_best_traders', which might have overlapping functions, so it doesn't reach a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance: 'Detects smart money patterns' implies use for analyzing profitable trading behaviors, and 'Heavy endpoint' suggests it might be resource-intensive. However, it lacks explicit when-to-use rules, alternatives (e.g., vs. 'token_best_traders'), or exclusions. This leaves the agent with vague context, scoring low due to insufficient actionable guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_best_traders (C)
Find the most profitable traders for a token, ranked by realized PnL. Returns buy/sell amounts, profit, and Twitter handles where available. Use to identify smart money that traded this token successfully. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Solana or Base token mint address (base58 or 0x) | |
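For the single-token analysis tools, the argument shape is uniform: a `chain` and a token `address`. Here is a sketch for `token_best_traders` (the mint address is a hypothetical placeholder); `token_bundles` and `token_dev_profile` take the same arguments.

```python
import json

# Sketch of a token_best_traders call. The address is a hypothetical
# placeholder mint; swap in the token you are investigating.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "token_best_traders",
        "arguments": {
            "chain": "sol",  # or "base" with a 0x address
            "address": "ExampleMint1111111111111111111111111111111",
        },
    },
}
print(json.dumps(request, indent=2))
```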
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the rate limit outright ('Heavy endpoint (rate-limited to 1 req/5sec per API key)'), but doesn't detail authentication needs, error conditions, or what else 'heavy' entails. For a tool with no annotations, this still leaves behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in the opening sentence. The closing 'Heavy endpoint' note adds useful context without unnecessary verbosity. However, it could be slightly more structured by separating usage notes from the main description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of retrieving trader data with PnL and social handles, no annotations, and no output schema, the description is incomplete. It lacks details on response format, pagination, error handling, and the meaning of 'profitable' (e.g., time frame, metrics). The 'Heavy endpoint' note is insufficient to compensate for these gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (chain and address). The description doesn't add any parameter-specific information beyond what the schema provides, such as format examples or constraints. Since the schema fully documents parameters, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to find 'the most profitable traders for a token, ranked by realized PnL,' returning buy/sell amounts, profit, and Twitter handles. It specifies the resource (traders for a token) and key data points. However, it doesn't explicitly differentiate from sibling tools like 'cross_traders' or 'token_top_holders,' which might have overlapping functionality, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance. It mentions 'Heavy endpoint,' which implies potential performance considerations, but doesn't specify when to use this tool versus alternatives like 'cross_traders' or 'token_top_holders.' No explicit when-to-use, when-not-to-use, or prerequisite information is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_bundles (C)
Analyze bundle and bot activity on a token: bundler percentage, sniper count, fresh wallet rate, and dev/team holdings. Helps detect coordinated launches and insider manipulation. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Solana or Base token mint address (base58 or 0x) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It names the metrics returned and states the rate limit ('Heavy endpoint (rate-limited to 1 req/5sec per API key)'), but doesn't disclose other behavioral traits, such as whether it's read-only, destructive, or requires authentication. Beyond its basic function and the rate limit, the description gives an agent little sense of how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but not optimally structured. It lists metrics in a comma-separated run-on and closes with a terse performance note ('Heavy endpoint'). While concise, it could be more front-loaded with a clearer purpose statement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It lacks details on what the analysis returns, how the metrics are calculated, or any behavioral context. For a tool with 2 parameters and complex-sounding analysis, this leaves significant gaps for an AI agent to understand its full use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (chain and address). The description doesn't add any meaning beyond what the schema provides—it doesn't explain how the address parameter relates to the analysis or clarify the chain default. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool provides 'bundle and bot activity' analysis with specific metrics (bundler percentage, sniper count, fresh wallet rate, dev/team holdings), which gives a general purpose. However, it doesn't clearly distinguish this from sibling tools like token_scan or token_preview that might also analyze token activity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description mentions it's a 'Heavy endpoint,' which might imply performance considerations, but doesn't specify when to choose it over similar tools like token_scan or token_dev_profile. There's no mention of prerequisites, alternatives, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_dev_profile (C)
Profile the creator/developer of a token: their wallet PnL, all other tokens they created (with outcomes), and their funding source. Essential for checking if a dev is a serial rugger. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Solana or Base token mint address (base58 or 0x) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that this is a 'Heavy endpoint' and states the rate limit (1 req/5sec per API key), but lacks details on authentication needs, data freshness, or error handling. For a tool with no annotations, this is insufficient to fully understand its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with key information (purpose, a warning use case, and endpoint type). It uses three concise sentences without unnecessary detail. However, the structure could benefit from slightly more elaboration to improve clarity without losing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of retrieving a developer profile with financial data, no annotations, and no output schema, the description is incomplete. It doesn't explain what the output includes (e.g., format of PnL, list of tokens), potential limitations, or how to interpret results. This leaves significant gaps for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with clear documentation for both parameters (chain and address). The description doesn't add any additional meaning beyond the schema, such as explaining parameter interactions or examples. Given the high schema coverage, a baseline score of 3 is appropriate, as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves a creator/developer profile with PnL, tokens created, and funding source, which provides a general purpose. However, it's vague about the specific action (e.g., 'retrieve' or 'fetch') and doesn't clearly distinguish it from sibling tools like 'wallet_profile' or 'token_preview', which might overlap in functionality. The phrase 'Heavy endpoint' adds context but doesn't refine the core purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives. The description mentions it's a 'Heavy endpoint', which implies it might be resource-intensive, but it doesn't specify use cases, prerequisites, or recommend other tools for different scenarios. Without this, users must infer usage from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_early_buyers (B)
Find wallets that bought a token within the first N hours of creation (default 1 hour). Returns SOL spent, token amount received, and transaction signature for each buyer. Useful for identifying insiders or coordinated early accumulation. Solana only. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| hours | No | Time window in hours from token creation (0.1-24.0, default 1.0) | |
| address | Yes | Token mint address | |
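A sketch of a `token_early_buyers` call that widens the window to the first 6 hours after launch; the mint address is a hypothetical placeholder, and `hours` must stay within 0.1-24.0 (default 1.0).

```python
import json

# Sketch of a token_early_buyers call. The address is a hypothetical
# placeholder mint; the hours value widens the default 1-hour window.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "token_early_buyers",
        "arguments": {
            "address": "ExampleMint1111111111111111111111111111111",
            "hours": 6.0,  # must stay within 0.1-24.0
        },
    },
}
assert 0.1 <= request["params"]["arguments"]["hours"] <= 24.0
print(json.dumps(request, indent=2))
```

Each entry in the response covers a buyer's SOL spent, tokens received, and transaction signature, per the description above.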
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals that this is a read operation (implied by the returned data), specifies 'Solana only' as a platform constraint, and states both the 'Heavy endpoint' warning and its rate limit (1 req/5sec per API key). However, it doesn't mention authentication requirements, data freshness, or pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise at five short sentences. It's front-loaded with the core purpose, followed by the returned fields, a use case, the platform constraint, and a performance warning. Each sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, 100% schema coverage, no annotations, and no output schema, the description provides adequate but incomplete context. It covers the basic purpose and some behavioral aspects (platform constraint, performance warning), but lacks details about return format, error conditions, or how the 'heavy endpoint' warning should affect usage decisions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all three parameters. The description mentions 'within N hours of creation' which aligns with the 'hours' parameter, and 'token mint address' aligns with the 'address' parameter, but adds no additional semantic context beyond what's in the schema. The 'Solana only' comment relates to the 'chain' parameter but doesn't add value beyond the schema's enum description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to retrieve early buyers of a token within a specific time window, listing SOL spent, token amount, and wallet addresses. It specifies 'Solana only' which distinguishes it from potential cross-chain alternatives, though it doesn't explicitly differentiate from sibling tools like token_top_holders or token_fresh_wallets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'within N hours of creation' and 'Solana only', but doesn't explicitly state when to use this tool versus alternatives like token_top_holders or token_fresh_wallets. The 'Heavy endpoint' warning provides some guidance about performance considerations, but no explicit alternatives or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_entry_price (Grade: A)
Entry prices for current token holders: what price each holder bought at vs current price, showing unrealized PnL. Use to gauge holder conviction and potential sell pressure. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Solana or Base token mint address (base58 or 0x) | |
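Since the tool reports each holder's entry price against the current price, the unrealized-PnL arithmetic it describes reduces to two small formulas. A sketch, assuming prices and token amounts arrive as plain numbers:

```python
def unrealized_pnl(entry_price: float, current_price: float, amount: float) -> float:
    """Unrealized PnL for one holder: (current - entry) * tokens held."""
    return (current_price - entry_price) * amount

def pnl_pct(entry_price: float, current_price: float) -> float:
    """Percentage gain/loss relative to the entry price."""
    return (current_price - entry_price) / entry_price * 100.0
```

Large negative percentages across many holders are exactly the "potential sell pressure" signal the description points at.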
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a 'Heavy endpoint' with specific rate limits ('rate-limited to 1 req/5sec per API key'). This provides crucial operational context that wouldn't be available from annotations alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is exceptionally concise and well-structured in just three sentences. Each sentence serves a distinct purpose: stating the tool's function, explaining its application, and warning about operational constraints. There is zero wasted verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description provides good contextual coverage. It explains the tool's purpose, usage context, and operational constraints. The main gap is the lack of output format details, but this is partially compensated by the clear purpose statement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (chain and address). The description doesn't add any parameter-specific information beyond what's in the schema. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs and resources: 'Entry prices for current token holders: what price each holder bought at vs current price, showing unrealized PnL.' It distinguishes from sibling tools like token_holders or token_top_holders by focusing on entry prices and PnL analysis rather than just holder lists or rankings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'Use to gauge holder conviction and potential sell pressure.' It implies usage for analyzing holder behavior and market sentiment. However, it does not explicitly state when not to use it or name specific alternatives among the many sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_fresh_wallets (Grade: B)
Detect fresh/new wallets holding a token with age category (hours/days/weeks old), token balance, percentage of supply, and funding source. Fresh wallets often indicate sybil attacks or coordinated buying. Solana only. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Solana or Base token mint address (base58 or 0x) | |
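Heavy endpoints on this server are rate-limited to 1 request per 5 seconds per API key. A minimal client-side throttle sketch a caller could wrap around fresh-wallet and other heavy calls; the class is an assumption, not part of the server:

```python
import time

class HeavyEndpointThrottle:
    """Enforce a minimum interval between calls (default: 1 req / 5 sec)."""

    def __init__(self, min_interval: float = 5.0):
        self.min_interval = min_interval
        self._last_call = None  # monotonic timestamp of the previous call

    def wait(self) -> float:
        """Block until the next call is allowed; return seconds actually slept."""
        now = time.monotonic()
        delay = 0.0
        if self._last_call is not None:
            delay = max(0.0, self._last_call + self.min_interval - now)
        if delay:
            time.sleep(delay)
        self._last_call = time.monotonic()
        return delay
```

Throttling before the call, rather than retrying after a 429-style rejection, keeps agent loops predictable.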
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is a 'Heavy endpoint,' which is useful context about performance/resource usage, and specifies 'Solana only' for chain compatibility. However, it doesn't describe what 'fresh/new' means operationally (e.g., time thresholds), the return format, error conditions, or rate limits, leaving gaps in behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in the first phrase. The two additional phrases ('Solana only' and 'Heavy endpoint') add necessary constraints and warnings efficiently. No wasted sentences, though it could be slightly more structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic purpose and constraints but lacks details on behavioral aspects like return values, error handling, or operational definitions (e.g., 'fresh/new'). It's minimally adequate for a tool with 2 parameters and high schema coverage, but could be more complete to compensate for missing structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('chain' and 'address') with descriptions. The description adds no additional parameter semantics beyond what's in the schema, such as clarifying 'address' format or 'chain' implications. Baseline 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: detecting fresh/new wallets holding a specific token, with specific attributes (age category, balance, funding source). It specifies 'Solana only' which distinguishes it from potential cross-chain alternatives, though it doesn't explicitly differentiate from sibling tools like 'token_early_buyers' or 'token_top_holders' that might overlap in analyzing token holders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'Solana only' and noting it's a 'Heavy endpoint,' suggesting it should be used for Solana tokens and with caution due to resource intensity. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'token_early_buyers' or 'token_top_holders,' nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_holders (Grade: A)
Paginated list of token holders with token account address, owner wallet, raw token amount, and escrow flag. Use for raw holder data; for enriched holder analysis use token_top_holders instead. Solana only. Light endpoint (no rate limit).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| limit | No | Number of holders to return (1-1000, default 100) | |
| cursor | No | Pagination cursor from previous response (omit for first page) | |
| address | Yes | Token mint address (base58 for Solana) | |
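The `cursor` parameter makes token_holders a standard cursor-paginated list: omit the cursor for the first page, then pass back the cursor from each response until none is returned. A sketch of draining every page, with `fetch_page` standing in for the real tool call; the response keys `holders` and `cursor` are assumptions about the payload shape:

```python
from typing import Callable, Iterator, Optional

def iter_all_holders(fetch_page: Callable[[Optional[str], int], dict],
                     limit: int = 100) -> Iterator[dict]:
    """Yield every holder across all pages of a cursor-paginated listing."""
    cursor = None  # first page: no cursor
    while True:
        page = fetch_page(cursor, limit)
        yield from page.get("holders", [])
        cursor = page.get("cursor")
        if not cursor:  # no cursor means the last page was reached
            break
```

Because this is a light endpoint with no rate limit, draining all pages in a tight loop is safe, unlike the heavy endpoints above.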
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a paginated list (implying multiple pages), specifies the data fields returned, indicates it's a 'Light endpoint (no rate limit)' which is valuable operational context, and notes platform restriction ('Solana only'). It doesn't mention error conditions or authentication requirements, but covers most essential behavioral aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: first states purpose and data fields, second provides usage guidance and alternative, third specifies constraints and performance. Every sentence adds value with zero waste, making it appropriately sized and front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides good coverage for a read-only list tool. It explains what data is returned, pagination behavior, usage context, platform restriction, and performance characteristics. The main gap is lack of information about response format/structure, but for a tool with 100% schema coverage and clear purpose, this is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions 'Solana only' which relates to the 'chain' parameter's default, but this is already covered in the schema description. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Paginated list'), resource ('token holders'), and scope ('with token account address, owner wallet, raw token amount, and escrow flag'). It distinguishes from sibling 'token_top_holders' by specifying this is for 'raw holder data' versus 'enriched holder analysis', making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use for raw holder data') and when to use an alternative ('for enriched holder analysis use token_top_holders instead'). It also specifies constraints ('Solana only') and performance characteristics ('Light endpoint (no rate limit)'), offering comprehensive usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_info (Grade: A)
On-chain token metadata from Metaplex DAS: name, symbol, description, image, decimals, mint/freeze/update authorities, total supply, and current price. Use for basic token identification. Solana only. Light endpoint (no rate limit).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Solana or Base token mint address (base58 or 0x) | |
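Because `address` accepts either base58 (Solana) or 0x-prefixed hex (Base), a caller can infer the `chain` argument from the address shape instead of asking the user. A hedged sketch using the standard base58 alphabet (which excludes 0, O, I, and l) and the 20-byte EVM hex format:

```python
import re

BASE58_RE = re.compile(r"^[1-9A-HJ-NP-Za-km-z]{32,44}$")  # Solana-style base58
HEX_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")               # EVM-style (Base)

def infer_chain(address: str) -> str:
    """Guess the chain parameter from the address format."""
    if HEX_RE.match(address):
        return "base"
    if BASE58_RE.match(address):
        return "sol"
    raise ValueError(f"unrecognized address format: {address!r}")
```

Failing fast on an unrecognized format avoids burning a call on an address the server would reject anyway.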
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: 'On-chain token metadata from Metaplex DAS' specifies the data source, 'Light endpoint (no rate limit)' discloses performance characteristics, and 'Solana only' sets blockchain constraints. However, it doesn't mention error conditions, response format, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and well-structured. Three sentences deliver maximum information with zero waste: first sentence states purpose and scope, second provides usage guidance, third adds behavioral context. Every sentence earns its place by adding distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description provides good contextual completeness. It covers purpose, usage guidelines, behavioral traits, and constraints. The main gap is the lack of output format description, which would be helpful since there's no output schema. However, for a metadata retrieval tool, the description provides sufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions 'Solana only' which relates to the chain parameter, but this is already covered in the schema's description. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('On-chain token metadata from Metaplex DAS') and resources ('name, symbol, description, image, decimals, mint/freeze/update authorities, total supply, and current price'). It distinguishes from siblings by specifying 'basic token identification' and 'Solana only', differentiating it from more specialized token analysis tools like token_holders or token_top_holders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use for basic token identification' defines the primary use case, 'Solana only' specifies the blockchain limitation, and 'Light endpoint (no rate limit)' indicates when this tool is preferable over alternatives that might have rate limits. The context clearly distinguishes it from more comprehensive token analysis tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_preview (Grade: B)
Quick token overview returning current price, market cap, liquidity, 24h volume, holder count, and social links. Use this for a fast first look at any token before deeper analysis. Light endpoint (no rate limit).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Solana or Base token mint address (base58 or 0x) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It notes 'Light endpoint (no rate limit)', which covers performance and rate-limit behavior, but it omits authentication needs, error handling, and data freshness. For a tool with no annotation coverage, this leaves gaps in understanding operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, using two brief sentences that efficiently convey purpose and performance. Every word earns its place, with no redundant or verbose elements, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete for a tool with 2 parameters. It lists output metrics but lacks details on return format, data structure, or error responses. For a tool without structured output documentation, more context on what the 'overview' actually returns would be needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters ('chain' and 'address') with descriptions and defaults. The description adds no parameter-specific information beyond implying token-related inputs through 'token overview', which doesn't enhance the schema's details. Baseline 3 is appropriate as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides a 'quick token overview' with specific metrics (price, market cap, liquidity, volume, holder count, socials) and identifies it as a 'Light endpoint' for fast retrieval. It distinguishes from siblings by focusing on basic token metrics rather than specialized analyses like 'token_top_holders' or 'token_dev_profile', though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for a fast, basic token overview via 'Quick token overview' and 'Fast (Light endpoint)', suggesting it's suitable for initial assessments. However, it lacks explicit guidance on when to use this versus more detailed sibling tools (e.g., 'token_scan' for deeper analysis or 'token_top_holders' for specific data), and no exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_scan (Grade: A)
Comprehensive token scan combining top traders, security flags (mint/freeze authority, honeypot risk), holder quality metrics, DEX pair data, and bundle/bot statistics in a single call. Returns a large JSON payload. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Solana or Base token mint address (base58 or 0x) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates key operational traits: the comprehensive nature of the scan, that it's a 'heavy endpoint,' and the specific rate limit ('1 req/5sec'). This covers performance characteristics and usage constraints that aren't captured elsewhere.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely efficient: a single opening sentence packed with essential information. It front-loads the core purpose ('Comprehensive token scan'), lists key data categories, and concludes with critical operational context. Every word earns its place with zero wasted verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 parameters (100% schema coverage) and no annotations, the description provides good behavioral context but lacks output information. Without an output schema, the description doesn't explain what format or structure the comprehensive scan results will return, which is a significant gap for understanding the tool's complete behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with both parameters clearly documented in the schema itself. The description adds no additional parameter information beyond what's already in the schema properties. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a 'full token scan' and lists specific data categories (top traders, security flags, holder quality metrics, DEX data, bundle/bot stats), which provides a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like token_best_traders or token_top_holders, which appear to offer overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the 'Heavy endpoint (rate-limited to 1 req/5sec per API key)' warning, which signals that this is a broad-scope tool to be called sparingly. However, it doesn't explicitly state when to use this vs. more specialized sibling tools like token_best_traders or token_top_holders, nor does it provide clear exclusion criteria or alternative recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_team_supply (Grade: A)
Identify team/insider wallets holding a token: categorized as Fresh, Inactive, or Linked wallets with balance, supply percentage, and funding source. Detects coordinated supply control. Solana only. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Solana or Base token mint address (base58 or 0x) | |
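With each wallet tagged Fresh, Inactive, or Linked and carrying a supply percentage, the natural first aggregation is total supply controlled per category. A sketch assuming a `category`/`supply_pct` shape for each returned wallet record (the field names are assumptions, not the documented payload):

```python
from collections import defaultdict

def supply_by_category(wallets: list) -> dict:
    """Sum the supply percentage held per wallet category."""
    totals = defaultdict(float)
    for w in wallets:
        totals[w["category"]] += w["supply_pct"]
    return dict(totals)
```

A high combined percentage across Fresh and Linked wallets is the "coordinated supply control" pattern the description flags.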
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: Solana-only restriction, heavy endpoint nature, and rate limits (1 req/5sec per API key). It also mentions what the tool detects (coordinated supply control) which isn't obvious from the name or schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: first states the core functionality and outputs, second provides critical constraints (Solana-only, heavy endpoint, rate limits). Every phrase adds value with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides good context about what the tool does, its constraints, and operational characteristics. It could be more complete by mentioning output format or pagination, but covers the essential behavioral aspects well given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description doesn't add parameter-specific information beyond what's in the schema (chain and address parameters). It mentions Solana-only which aligns with the chain parameter's default, but doesn't provide additional semantic context for the address parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Identify team/insider wallets holding a token' with specific categorization (Fresh, Inactive, Linked) and metrics (balance, supply percentage, funding source). It distinguishes from siblings like token_holders or token_top_holders by emphasizing coordinated supply detection and Solana-only focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for analyzing token ownership patterns on Solana with coordinated supply control detection. It doesn't explicitly mention when not to use it or name alternatives, but the Solana-only restriction and heavy endpoint warning give practical guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_top_holders (Grade: C)
Top holders of a token enriched with realized PnL, profit stats, winrate, and wallet funding source. Use to assess holder quality and detect smart money positions. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Solana or Base token mint address (base58 or 0x) | |
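Given the enriched fields (realized PnL, winrate), a smart-money screen over the returned holders is a simple filter. The field names and thresholds below are assumptions for illustration, not the documented payload:

```python
def smart_money(holders: list, min_winrate: float = 0.6,
                min_realized_pnl: float = 0.0) -> list:
    """Keep holders whose track record suggests a smart-money position."""
    return [h for h in holders
            if h.get("winrate", 0.0) >= min_winrate
            and h.get("realized_pnl", 0.0) > min_realized_pnl]
```

Running this filter client-side keeps the heavy, rate-limited endpoint to a single call per token.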
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It flags 'Heavy endpoint' with a concrete rate limit (1 req/5sec per API key), but doesn't explain what else 'heavy' entails (e.g., high latency, data volume, or cost). It also lacks information on permissions, error handling, and response format, leaving significant gaps for a tool that likely involves complex data retrieval.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with key information about the data returned. Both sentences are relevant: the first outlines the enriched metrics, and the second warns about endpoint heaviness. There's no unnecessary fluff, though it could be slightly more structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity implied by enriched financial metrics and no output schema, the description is incomplete. It doesn't explain the return values (e.g., format of PnL data), potential limitations, or how 'Heavy endpoint' affects usage. With no annotations and missing output details, it fails to provide enough context for reliable agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting the 'chain' and 'address' parameters. The description adds no specific parameter semantics beyond what the schema provides, such as explaining how the address is used or what 'PnL, profit stats' entail in relation to the parameters. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool provides 'Top holders enriched with PnL, profit stats, winrate, and funding source' which clarifies it retrieves token holder data with financial metrics. However, it doesn't specify the exact verb (e.g., 'retrieve' or 'fetch') or clearly distinguish it from sibling tools like 'token_best_traders' or 'cross_holders' that might also involve holder/trader data. The phrase 'Heavy endpoint' adds some context but doesn't fully define the core action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description mentions it's a 'Heavy endpoint,' which might imply performance considerations, but it doesn't specify scenarios, prerequisites, or exclusions. Given sibling tools like 'cross_holders' and 'token_best_traders,' there's no differentiation to help an agent choose appropriately.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transactions_parse — Grade A
Parse Solana transaction signatures into decoded, human-readable data: token transfers, SOL movements, program interactions, and instruction details. Provide up to 100 transaction signatures. Solana only. Light endpoint (no rate limit).
| Name | Required | Description | Default |
|---|---|---|---|
| transactions | Yes | List of transaction signatures to parse (max 100). Returns decoded instruction data, token transfers, and program interactions. | |
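Since the tool caps each call at 100 signatures, a client holding a larger list must chunk it. The sketch below shows one way to do that; the `sig_N` strings are hypothetical stand-ins for real base58 transaction signatures, and the JSON-RPC envelope is an assumption about the client framing:

```python
# Hypothetical signatures; real calls take base58 transaction signatures.
signatures = [f"sig_{i}" for i in range(250)]

# transactions_parse accepts at most 100 signatures per call, so chunk
# the list client-side into batches of up to 100.
batches = [signatures[i:i + 100] for i in range(0, len(signatures), 100)]

requests = [
    {
        "jsonrpc": "2.0",
        "id": n,
        "method": "tools/call",
        "params": {"name": "transactions_parse",
                   "arguments": {"transactions": batch}},
    }
    for n, batch in enumerate(batches, start=1)
]
print(len(requests))  # 250 signatures -> 3 batches
```

As a light endpoint with no rate limit, the batches can be sent back-to-back without client-side throttling.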
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool decodes transactions into human-readable data, supports up to 100 signatures, is Solana-specific, and is a 'Light endpoint (no rate limit)', which informs about performance and limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and efficiently lists key details in a single, well-structured sentence. Every element (e.g., data types, limits, constraints) earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (1 parameter, no output schema, no annotations), the description is adequate but has gaps. It covers purpose, constraints, and behavioral traits, but lacks details on error handling, response format, or prerequisites, which could be important for a parsing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'transactions' parameter. The description adds marginal value by reiterating the max of 100 signatures and the types of data returned, but does not provide additional syntax or format details beyond what the schema specifies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('parse'), resource ('Solana transaction signatures'), and scope ('into decoded, human-readable data: token transfers, SOL movements, program interactions, and instruction details'), distinguishing it from sibling tools that focus on chains, tokens, or wallets rather than transaction parsing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Parse Solana transaction signatures'), including constraints ('Provide up to 100 transaction signatures', 'Solana only') and performance characteristics ('Light endpoint (no rate limit)'), but does not explicitly mention when not to use it or name alternative tools among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet_connections — Grade B
Map SOL transfer connections for a wallet: who sent/received SOL, transfer amounts, and frequency. Reveals funding networks and wallet clusters. Solana only. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Wallet address | |
| min_sol | No | Minimum SOL transfer to include | 0.1 |
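Because this endpoint is rate-limited to 1 request per 5 seconds per API key, a client calling it repeatedly (e.g. walking a funding network wallet by wallet) should throttle itself. A minimal sketch of such a throttle, assuming calls are issued sequentially from one process:

```python
import time

class Throttle:
    """Client-side pacing for the documented 1 req / 5 sec limit."""

    def __init__(self, min_interval: float = 5.0):
        self.min_interval = min_interval
        self._last = 0.0  # monotonic timestamp of the previous call

    def wait(self):
        # Sleep just long enough to keep min_interval between calls.
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()

throttle = Throttle(5.0)
# throttle.wait()  # call before each wallet_connections request
```

This keeps a multi-wallet crawl within the stated limit without tracking server-side responses; a production client would also back off on explicit rate-limit errors.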
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context: 'Heavy endpoint (rate-limited to 1 req/5sec per API key)' discloses the rate limit explicitly, and 'Solana only' specifies the blockchain constraint. However, it doesn't disclose whether this is a read-only operation, what data format is returned, pagination, error handling, or authentication needs. The description provides basic constraints but lacks operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise at four short sentences. The first efficiently states the core purpose and scope, the second ('Reveals funding networks and wallet clusters') explains the analytical value, and the remainder adds important constraints ('Solana only', 'Heavy endpoint'). There's no wasted text, and information is front-loaded, though it could be slightly more structured by explicitly separating purpose from warnings.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is partially complete. It covers the what and some constraints, but lacks details on return values (critical with no output schema), error cases, or how results are structured. For a 'heavy endpoint' tool analyzing financial transactions, more operational context would help the agent use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (address and min_sol). The description adds no parameter-specific semantics beyond what's in the schema—it doesn't explain how 'min_sol' interacts with the analysis or provide examples of address formats. With high schema coverage, the baseline is 3, and the description doesn't compensate with additional parameter insights.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: mapping SOL transfer connections for a wallet, including who sent/received SOL, amounts, and frequency. It specifies 'Solana only', which distinguishes it from potential cross-chain tools, though it doesn't explicitly differentiate it from sibling tools like 'wallet_profile' or 'cross_traders'. The verb 'Map' is specific, but the description could be more precise about the operation type (e.g., 'retrieves' or 'analyzes').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance. It states 'Solana only' which limits context, and 'Heavy endpoint' warns about performance, but offers no explicit when-to-use criteria or alternatives. There's no mention of when to choose this over sibling tools like 'wallet_profile' (which might provide broader wallet data) or 'cross_traders' (which might focus on trading patterns). The agent must infer usage from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet_history — Grade A
Wallet transaction history from Helius Enhanced Transactions API. Returns decoded transactions with token transfers, program interactions, and human-readable descriptions. Supports filtering by transaction type and source protocol. Solana only. Light endpoint (no rate limit).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of transactions to return (1-100, default 20) | |
| before | No | Pagination: transaction signature to start before (omit for most recent) | |
| source | No | Filter by source protocol (e.g. "RAYDIUM", "JUPITER"). Omit for all sources. | |
| address | Yes | Wallet address (base58 for Solana) | |
| tx_type | No | Filter by transaction type (e.g. "SWAP", "TRANSFER", "NFT_SALE"). Omit for all types. | |
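The filter and pagination parameters above compose naturally into a single arguments object. The sketch below shows one such combination; the address is an explicit placeholder to be replaced with a real base58 wallet address, and the JSON-RPC envelope is an assumption about client framing:

```python
import json

# Hypothetical wallet_history call exercising the documented filters.
# Replace the address placeholder with a real base58 wallet address.
args = {
    "address": "<base58-wallet-address>",
    "limit": 50,          # 1-100, default 20
    "tx_type": "SWAP",    # e.g. "SWAP", "TRANSFER", "NFT_SALE"; omit for all
    "source": "JUPITER",  # e.g. "RAYDIUM", "JUPITER"; omit for all
    # "before": "...",    # pagination: oldest signature from the prior page
}
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "wallet_history", "arguments": args},
}
print(json.dumps(request)[:40])
```

To page backwards through history, a client would take the oldest signature from each response and pass it as `before` on the next call, repeating until fewer than `limit` transactions come back.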
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond the schema: 'Light endpoint (no rate limit)' informs about performance and constraints, and 'Returns decoded transactions with token transfers, program interactions, and human-readable descriptions' describes output behavior. It does not cover error handling or pagination details, but it provides useful operational insights.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with three concise sentences that each add value: the first states the core purpose, the second details the return content and filtering, and the third specifies constraints and performance. There is no wasted language, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, output behavior, constraints (Solana-only, light endpoint), and filtering capabilities. However, it lacks details on error cases, response format structure, or pagination beyond the 'before' parameter, leaving some gaps for a tool with no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal semantic value beyond the schema, mentioning 'filtering by transaction type and source protocol' which aligns with tx_type and source parameters but does not provide additional syntax or format details. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving 'wallet transaction history' from the 'Helius Enhanced Transactions API' with specific details like 'decoded transactions with token transfers, program interactions, and human-readable descriptions.' It distinguishes itself from siblings by focusing on transaction history rather than other wallet or token-related functions, such as wallet_profile or token_info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'Solana only' and 'Light endpoint (no rate limit)' indicate when to use it. However, it does not explicitly mention when not to use it or name specific alternatives among sibling tools, such as wallet_profile for non-transaction data or transactions_parse for parsing individual transactions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet_profile — Grade B
Complete wallet profile: SOL balance, realized PnL, 7-day and 30-day profit stats, winrate, wallet funding source, and known labels/tags. Use to assess whether a wallet is smart money, a bot, or a fresh sybil. Heavy endpoint (rate-limited to 1 req/5sec per API key).
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: "sol" (default) or "base" | sol |
| address | Yes | Wallet address (base58 for Solana, 0x for Base) | |
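Since the address format depends on the chosen chain (base58 for "sol", 0x-prefixed for "base"), a client can catch mismatches before spending a rate-limited call. A sketch of such client-side validation, under the assumption that the server enforces the documented formats (the helper name is hypothetical, not part of the server's API):

```python
def wallet_profile_args(address: str, chain: str = "sol") -> dict:
    """Sanity-check wallet_profile arguments against the documented formats.

    base58 for chain "sol" (the default), 0x-prefixed for chain "base".
    This is a client-side sketch, not the server's actual validation.
    """
    if chain not in ("sol", "base"):
        raise ValueError('chain must be "sol" or "base"')
    if chain == "base" and not address.startswith("0x"):
        raise ValueError('chain "base" expects a 0x-prefixed address')
    if chain == "sol" and address.startswith("0x"):
        raise ValueError('chain "sol" expects a base58 address, not 0x')
    return {"address": address, "chain": chain}
```

Rejecting a malformed address locally avoids burning one of the 1-per-5-seconds request slots on a call that would fail anyway.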
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does disclose the rate limit explicitly ('Heavy endpoint (rate-limited to 1 req/5sec per API key)'), but doesn't describe authentication needs, error conditions, or whether the operation is read-only. The description doesn't contradict annotations (none exist), but provides only limited behavioral context beyond the rate limit and the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, listing key data points in the opening sentence, followed by a usage hint ('Use to assess whether a wallet is smart money, a bot, or a fresh sybil') and a brief performance note. Every element serves a purpose. It could be slightly more structured but avoids unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description partially compensates by listing specific data points returned (e.g., SOL balance, PnL). However, it doesn't fully address behavioral aspects like error handling or the implications of 'Heavy endpoint,' and lacks details on output format. For a tool with 2 parameters and comprehensive return data, this is adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters (chain and address). The description doesn't add any parameter-specific information beyond what's in the schema, such as explaining how the address format affects the returned data. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what the tool does: it provides a complete wallet profile including SOL balance, PnL, profit stats, winrate, funding source, and labels. It specifies the resource (wallet) and the comprehensive data returned, though it doesn't explicitly distinguish from sibling tools like 'wallet_connections' beyond implying this is a comprehensive profile endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage guidance: 'Use to assess whether a wallet is smart money, a bot, or a fresh sybil' names concrete scenarios, and the 'Heavy endpoint' note flags performance considerations. However, it doesn't specify when to prefer this tool over alternatives like 'wallet_connections' or other wallet-related tools, and no when-not-to-use exclusions or prerequisites are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.