mcp-server
Server Details
Full Solana DeFi coverage: launchpads, tokens, trades, and wallets, decoded at scale.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: NeglectApp/mcp
- GitHub Stars: 1
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
20 tools

get-api-health (A)
Returns 200 OK if the API service is healthy and responding.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
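Since get-api-health takes no arguments, its invocation is the minimal MCP tools/call shape. A sketch of the JSON-RPC 2.0 payload a client would send over the Streamable HTTP transport (the request id and framing are illustrative):

```python
import json

# Minimal JSON-RPC 2.0 envelope for an MCP tools/call invocation.
# get-api-health declares no parameters, so "arguments" stays empty.
health_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get-api-health",
        "arguments": {},
    },
}

payload = json.dumps(health_request)
```

A healthy server should answer with a result rather than a JSON-RPC error; per the description above, the underlying API responds 200 OK when it is up.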
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a specific HTTP status code (200 OK) under specific conditions, which is useful behavioral information. However, it doesn't mention error conditions, response time expectations, or whether this is a lightweight check versus a comprehensive health assessment.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the complete purpose without any wasted words. It's front-loaded with the core functionality and appropriately sized for a simple health check tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter health check tool with no annotations and no output schema, the description provides sufficient context about what the tool does and what it returns. However, it doesn't specify the exact return format beyond '200 OK' (e.g., whether it returns JSON, plain text, or includes additional health metrics), which would be helpful given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and schema description coverage is 100% (empty schema). The description appropriately doesn't discuss parameters since none exist, which is correct for this case. No additional parameter information is needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Returns') and resource ('API service health'), specifying the exact condition ('200 OK if healthy and responding'). It distinguishes itself from sibling tools that focus on token, wallet, or audit data by targeting system status instead of blockchain data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('if the API service is healthy and responding'), suggesting it should be used for health checks or monitoring. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, which are all data retrieval tools rather than health checks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-latest-graduated-tokens (B)
Fetches the most recently graduated tokens, ordered by graduation time (descending).
| Name | Required | Description | Default |
|---|---|---|---|
| queryParams | Yes | | |
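The required queryParams object carries, per the quality notes for this tool, a single limit field. A hedged sketch of the call arguments; the value 20 is purely illustrative, since the actual default and allowed range are not shown on this page:

```python
import json

# Sketch of a tools/call payload for get-latest-graduated-tokens.
# The input schema wraps everything in a required "queryParams" object;
# the quality notes mention a single "limit" field (its default and
# valid range are assumptions here, not documented on this page).
arguments = {"queryParams": {"limit": 20}}

request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get-latest-graduated-tokens", "arguments": arguments},
}
print(json.dumps(request))
```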
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but offers minimal behavioral insight. It mentions ordering and that it fetches 'most recently graduated' tokens, but doesn't disclose pagination behavior, rate limits, authentication requirements, data freshness, or what 'graduated' means in this context. For a read operation with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. Every word earns its place—'fetches' (action), 'most recently graduated tokens' (resource), 'ordered by graduation time (descending)' (sorting). No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations, no output schema, and 0% schema description coverage, the description is insufficient. It doesn't explain what 'graduated tokens' are, what data is returned, error conditions, or behavioral constraints. The context signals indicate complexity (nested objects, required parameters) that isn't addressed, leaving the agent with significant unknowns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no parameter information beyond the schema. However, the tool's single parameter (limit) is already well documented in the schema itself, with a default, range, and description, so the description has little to compensate for; the schema adequately covers the single parameter's semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('fetches') and resource ('most recently graduated tokens'), with specific ordering ('ordered by graduation time descending'). It distinguishes from siblings by focusing on 'graduated tokens' rather than other token attributes like price, metadata, or audit data. However, it doesn't explicitly contrast with similar tools like 'get-trending-tokens' or 'search-tokens'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context for 'graduated tokens', or when other tools like 'get-trending-tokens' or 'search-tokens' might be more appropriate. Usage is implied only by the tool's name and description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-mcp-config (B)
Exposes MCP discovery schema for Smithery and other MCP clients.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'exposes' data, implying a read-only operation, but doesn't clarify aspects like authentication requirements, rate limits, error handling, or the format of the exposed schema. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any fluff or unnecessary details. It is front-loaded with the core action and resource, making it easy to understand at a glance. Every word earns its place in conveying essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple read operation with no parameters) and the lack of annotations and output schema, the description is minimally adequate. It explains what the tool does but doesn't provide behavioral details or output expectations. For a tool with no structured data beyond the input schema, it meets the basic requirement but could be more informative about the exposed schema's content or usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters; the input schema is empty, so description coverage is trivially complete. With no parameters, there is nothing for the description to compensate for, and it appropriately focuses on the tool's purpose without redundant parameter information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Exposes MCP discovery schema for Smithery and other MCP clients.' It specifies the verb 'exposes' and the resource 'MCP discovery schema,' and mentions the target audience. However, it doesn't explicitly differentiate from sibling tools, which are all data retrieval tools but for different resources (e.g., tokens, wallets, audits).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Smithery and other MCP clients' as the audience, but doesn't specify scenarios, prerequisites, or exclusions. Without such context, users must infer usage based on the tool name and purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-pair-audit (B)
Returns audit data for a single liquidity pair and its underlying token. Includes token-level distribution (developer, top holders, insiders, bundlers) and pair-level ownership breakdown (snipers, LP burn).
| Name | Required | Description | Default |
|---|---|---|---|
| pathParams | Yes | | |
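The required pathParams object presumably carries the pair identifier; the quality notes for this tool refer to a pairAddress field. A sketch with a placeholder address, where both the field name and the address format are assumptions drawn from those notes:

```python
import json

# Hypothetical Solana pair address (placeholder, not a real pair).
PAIR_ADDRESS = "ExamplePairAddress1111111111111111111111111"

# get-pair-audit routes its input through a required "pathParams" object;
# "pairAddress" as the field name follows the review notes for this tool.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "get-pair-audit",
        "arguments": {"pathParams": {"pairAddress": PAIR_ADDRESS}},
    },
}
print(json.dumps(request))
```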
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a read operation ('Returns'), which implies it's non-destructive, but doesn't cover other behavioral aspects like authentication needs, rate limits, error conditions, or response format. The description adds some context about what data is included, but lacks comprehensive behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core purpose, and the second elaborates on included data types. Every phrase adds value without redundancy, making it appropriately sized and front-loaded with the main function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is partially complete. It clearly explains what data is returned but lacks details on parameters, behavioral constraints, and output structure. For a read-only audit tool, it provides enough to understand the scope but leaves gaps in practical usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description doesn't mention any parameters, while the schema has 1 parameter with 0% description coverage. The description implies a single pair is targeted but doesn't explain how to specify it. With low schema coverage, the description doesn't compensate by explaining the 'pairAddress' parameter's format or requirements, though the tool's purpose inherently suggests a pair identifier is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns audit data for a single liquidity pair and its underlying token.' It specifies the verb ('Returns') and resource ('audit data'), and details the scope ('single liquidity pair and its underlying token') with examples of what's included. However, it doesn't explicitly differentiate from its closest sibling 'get-pair-audit-multiple', which handles multiple pairs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention the sibling 'get-pair-audit-multiple' for multiple pairs, nor does it specify prerequisites or exclusions. Usage is implied by the scope ('single liquidity pair'), but no explicit when/when-not instructions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-pair-audit-multiple (A)
Returns audit data for multiple liquidity pairs in a single batch call. Accepts up to 10 pair addresses via the addresses query parameter. Each entry includes pair-level and token-level audit details including developer, top holders, insiders, bundlers, snipers, and LP burn distribution.
Cost: 10 credits per request.
| Name | Required | Description | Default |
|---|---|---|---|
| queryParams | Yes | | |
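The description above states the tool accepts up to 10 pair addresses via the addresses query parameter. A sketch of the batch call; the comma-separated encoding mirrors the sibling get-token-audit-multiple and is an assumption for this tool, and the addresses are placeholders:

```python
import json

# Placeholder pair addresses (not real pairs).
pairs = [
    "PairAddressA111111111111111111111111111111",
    "PairAddressB222222222222222222222222222222",
]

# The tool accepts up to 10 addresses; checking the cap client-side
# avoids spending a 10-credit request on input the API may reject.
if len(pairs) > 10:
    raise ValueError("get-pair-audit-multiple accepts at most 10 addresses")

request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "get-pair-audit-multiple",
        "arguments": {"queryParams": {"addresses": ",".join(pairs)}},
    },
}
print(json.dumps(request))
```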
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a read operation (implied by 'Returns'), includes cost information (10 credits per request), and specifies batch limits (up to 10 addresses). However, it doesn't mention error handling, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with two sentences: the first states purpose and key parameter details, the second provides cost information. Every element serves a clear purpose with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations, no output schema, and 0% schema coverage, the description provides good basic information about purpose, parameters, and cost, but lacks details about response format, error conditions, or authentication requirements that would be helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the single parameter, the description compensates well by explaining the 'addresses' parameter accepts comma-separated lists of up to 10 pair addresses and provides an example format. It adds meaningful context beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Returns audit data') and resources ('multiple liquidity pairs'), and distinguishes it from its sibling 'get-pair-audit' by specifying batch capability for up to 10 pairs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (batch retrieval of audit data for multiple pairs), but does not explicitly state when not to use it or name alternatives like 'get-pair-audit' for single-pair queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-token-ath (B)
Returns the all-time-high (ATH) market cap and the timestamp when it occurred, based on all historical trades for the token.
| Name | Required | Description | Default |
|---|---|---|---|
| pathParams | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states it's a read operation ('Returns'), but doesn't mention rate limits, authentication requirements, data freshness, error conditions, or what happens with invalid token addresses. The description is minimal and lacks important operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that immediately states the tool's purpose without unnecessary words. Every element ('Returns', 'ATH market cap', 'timestamp', 'based on all historical trades') contributes directly to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations, no output schema, and 0% schema description coverage, the description is insufficient. It doesn't explain return format, error handling, data sources, or operational constraints. While concise, it leaves too many contextual gaps for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (the schema has no parameter descriptions), the description doesn't mention the tokenAddress parameter at all. However, since there's only one parameter and the tool name/title imply token specificity, the description is minimally adequate but doesn't compensate for the schema gap by explaining parameter format or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns') and resource ('all-time-high market cap and timestamp') with precise scope ('based on all historical trades for the token'). It effectively distinguishes from siblings like get-token-price or get-token-performance by focusing exclusively on ATH data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when ATH data is needed, but provides no explicit guidance on when to choose this tool over alternatives like get-token-history-range or get-token-performance. No prerequisites, exclusions, or comparison with sibling tools are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-token-audit (B)
Returns audit data for a given token address, including developer, top holder, insider, bundler, sniper, and LP burn distributions.
| Name | Required | Description | Default |
|---|---|---|---|
| pathParams | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While it mentions what data is returned, it doesn't address important behavioral aspects like rate limits, authentication requirements, error conditions, pagination, or whether this is a read-only operation (though 'Returns' implies reading). More context about the operation's characteristics is needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and lists specific data elements. Every word serves a purpose with zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 0% schema description coverage, the description provides basic but incomplete context. It covers what data is returned but lacks details about return format, error handling, authentication, and differentiation from sibling tools. For a tool with one parameter and no structured metadata, this is minimally adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (the tokenAddress parameter has only a basic type description), so the description must compensate. The description mentions 'for a given token address' which provides context for the single parameter, but doesn't add format details, validation rules, or examples beyond what's minimally implied. This meets the baseline for adequate but incomplete parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns audit data for a given token address' with specific data types listed (developer, top holder, etc.). It uses a specific verb ('Returns') and resource ('audit data'), but doesn't explicitly differentiate from sibling tools like 'get-token-audit-multiple' or 'get-pair-audit'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get-token-audit-multiple' (for multiple tokens) or 'get-pair-audit' (for token pairs). It only states what the tool does, not when it's appropriate versus other audit-related tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-token-audit-multiple (B)
Returns audit data for multiple tokens in a single batch call. Accepts up to 10 token addresses via a comma-separated addresses query parameter. Each entry includes developer, top holder, insider, bundler, sniper, and LP burn distributions.
Cost: 10 credits per request.
| Name | Required | Description | Default |
|---|---|---|---|
| queryParams | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses cost (10 credits per request) and batch limit (up to 10 token addresses), which are useful behavioral traits. However, it doesn't cover other aspects like rate limits, error handling, authentication needs, or response format. The description adds some value but leaves gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by details on parameters and cost. Every sentence earns its place with no wasted words. The structure is clear and efficient, making it easy to scan and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and low schema coverage, the description is moderately complete. It covers purpose, parameter format, and cost, but lacks details on return values (e.g., what 'developer, top holder, insider, bundler, sniper, and LP burn distributions' entail), error cases, or authentication. For a tool with one parameter but complex output implied, it should do more to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning by specifying the parameter accepts 'up to 10 token addresses via a comma-separated `addresses` query parameter,' which clarifies format and constraints beyond the schema's basic type and example. However, it doesn't explain what 'token mint addresses' are or provide examples, leaving some semantics unclear. The description partially compensates but not fully.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Returns audit data for multiple tokens in a single batch call,' specifying the verb (returns), resource (audit data), and scope (multiple tokens). It distinguishes from sibling 'get-token-audit' by emphasizing batch capability, though it doesn't explicitly name that sibling. The purpose is specific but could be more explicit about differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for batch token audits, but doesn't explicitly state when to use this vs. alternatives like 'get-token-audit' (single token) or other audit tools. It mentions a cost of 10 credits per request, which provides some context, but lacks clear guidance on prerequisites or exclusions. Usage is implied rather than explicitly defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-token-history (B)
Fetches historical price, market cap, or liquidity snapshots for a token across multiple time windows.
| Name | Required | Description | Default |
|---|---|---|---|
| body | Yes | | |
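The required body object's exact fields are not documented on this page; the quality notes mention a token parameter and a metric enum matching price, market cap, and liquidity. A heavily hedged sketch in which the field names ("token", "metric") and the enum spelling "price" are assumptions, and the mint address is a placeholder:

```python
import json

# Sketch of a get-token-history invocation. The field names below are
# assumed, not taken from a published schema; "price" is one of the
# metrics the description lists (price, market cap, liquidity).
body = {
    "token": "ExampleTokenMint111111111111111111111111111",  # placeholder mint
    "metric": "price",
}

request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {"name": "get-token-history", "arguments": {"body": body}},
}
print(json.dumps(request))
```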
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions fetching 'snapshots' and 'multiple time windows,' hinting at batch retrieval, but fails to disclose critical behaviors like rate limits, authentication needs, pagination, or error handling. This is inadequate for a tool with potential complexity in data retrieval.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads key information (action, resource, metrics). Every word earns its place, with no redundancy or fluff, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and low schema coverage, the description is incomplete. It covers the tool's purpose and parameter semantics adequately but lacks behavioral details (e.g., data format, limits) and output expectations, leaving gaps for the agent to handle a historical data fetch operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context beyond the schema: it clarifies that metrics include 'price, market cap, or liquidity' (matching the enum) and specifies 'multiple time windows,' which the schema omits. With 0% schema description coverage, this compensates well, though it doesn't detail the 'token' parameter's address format or default time windows.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('fetches') and resource ('historical price, market cap, or liquidity snapshots for a token'), making the purpose evident. It distinguishes from siblings like 'get-token-price' (single metric) and 'get-token-history-range' (specific range) by mentioning multiple metrics and time windows, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for historical data across multiple time windows, suggesting it's for trend analysis rather than current data. However, it lacks explicit guidance on when to use this versus siblings like 'get-token-price' (current price) or 'get-token-history-range' (custom range), leaving the agent to infer from context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-token-history-range (C)
Returns the lowest and highest values of selected metrics (price, market cap, liquidity) within a specified time range for a token.
| Name | Required | Description | Default |
|---|---|---|---|
| body | Yes | — | — |
| pathParams | Yes | — | — |
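Since the evaluation notes a `mode` parameter in the schema for range clamping, a hedged sketch of the two required arguments might be:

```python
# Hypothetical get-token-history-range arguments. All field names are
# assumptions; "mode" reflects the range-clamping hint in the input
# schema, and the timestamp format is assumed to be unix seconds.
args = {
    "pathParams": {"tokenAddress": "So11111111111111111111111111111111111111112"},
    "body": {
        "metrics": ["price", "liquidity"],  # metric choices from the description
        "from": 1700000000,                 # assumed unix-seconds format
        "to": 1700086400,
        "mode": "clamp",                    # assumed value; enum not documented
    },
}
```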
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but offers minimal behavioral insight. It mentions 'lowest and highest values' but doesn't clarify if this is a read-only operation, what permissions are needed, rate limits, error handling, or response format. The input schema hints at behavior (e.g., 'mode' parameter for range clamping), but the description doesn't explain these traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality. It wastes no words, though it could be more structured (e.g., separating purpose from constraints). Every part earns its place, but it's slightly terse given the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (2 parameters with nested objects, no output schema, and no annotations), the description is incomplete. It lacks details on behavioral traits, output format, error handling, and sibling differentiation. For a tool that returns aggregated historical data, this leaves significant gaps for an AI agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but adds little. It mentions 'selected metrics (price, market cap, liquidity)' and 'specified time range,' which aligns with schema parameters but doesn't provide additional semantics like format details or usage examples. With 0% coverage, this is inadequate compensation, but the description at least hints at parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns the lowest and highest values of selected metrics... within a specified time range for a token.' It specifies the verb ('returns'), resource ('token'), and scope ('lowest and highest values of selected metrics'), though it doesn't explicitly differentiate from sibling tools like 'get-token-history' or 'get-token-price'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get-token-history' (which might return full historical data) or 'get-token-price' (which might return current price), leaving the agent to guess based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-token-holders (C)
Returns ranked holder wallets, amounts, and percentage ownership for a given token address.
| Name | Required | Description | Default |
|---|---|---|---|
| pathParams | Yes | — | — |
| queryParams | Yes | — | — |
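A hedged sketch of a call, with the `limit` query parameter the evaluation flags as undocumented; every field name here is an assumption, and the row shape is inferred only from the description's mention of ranked wallets, amounts, and percentage ownership:

```python
# Hypothetical get-token-holders arguments; "limit" presumably caps the
# number of ranked holders returned (its default is undocumented).
args = {
    "pathParams": {"tokenAddress": "So11111111111111111111111111111111111111112"},
    "queryParams": {"limit": 20},
}

# Plausible (unconfirmed) shape of one returned row, per the description:
example_row = {"wallet": "<holder address>", "amount": 1_000_000, "percentage": 2.5}
```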
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool returns data but doesn't mention whether it's read-only, has rate limits, requires authentication, or how it handles errors. The description lacks critical behavioral context like pagination (implied by 'limit' parameter but not explained), data freshness, or what 'ranked' means (e.g., by amount descending).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality. There's no wasted language or redundancy. However, it could be slightly more structured by separating the input requirement from the output description for even clearer parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 parameters, no annotations, no output schema, and 0% schema description coverage, the description is inadequate. It doesn't explain the return format beyond listing data fields, doesn't address behavioral aspects like rate limits or authentication, and leaves the 'limit' parameter completely undocumented. The description should provide more context given the lack of structured documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description mentions 'token address' which maps to one parameter, but doesn't explain the 'limit' parameter at all. It adds minimal value beyond the parameter names themselves, failing to compensate for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Returns') and the specific data returned ('ranked holder wallets, amounts, and percentage ownership'), along with the required input ('for a given token address'). It distinguishes itself from siblings by focusing on token holder data rather than price, metadata, or other token attributes. However, it doesn't explicitly contrast with similar tools like 'get-wallet-holdings' which might have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, appropriate contexts, or compare with sibling tools like 'get-wallet-holdings' or 'search-tokens' that might provide related data. The agent must infer usage from the description alone without explicit direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-token-marketdata (A)
Returns real-time metrics including price, market capitalization, FDV, liquidity, total and circulating supply, and current holder count. This endpoint does not use time windows and does not include trade statistics.
| Name | Required | Description | Default |
|---|---|---|---|
| pathParams | Yes | — | — |
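With no output schema published, the response can only be guessed at. The sketch below maps the description's metric list to plausible key names; the names themselves are assumptions:

```python
# Field names guessed from the description's metric list (price, market
# capitalization, FDV, liquidity, total and circulating supply, holder
# count); no output schema exists to confirm them.
assumed_response_keys = {
    "price", "marketCap", "fdv", "liquidity",
    "totalSupply", "circulatingSupply", "holderCount",
}
```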
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses that the tool returns real-time data and excludes time windows/trade statistics, which is useful behavioral context. However, it doesn't mention rate limits, authentication needs, error conditions, or response format details that would be important for a data-fetching tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first lists all returned metrics, the second clarifies exclusions. Every word earns its place with no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a data retrieval tool with no annotations and no output schema, the description provides good coverage of what metrics are returned and what's excluded. However, it lacks information about response structure, potential errors, and the single parameter's semantics, leaving gaps for the agent to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides no information about the single required parameter (tokenAddress) beyond what's minimally implied by the tool name. No format examples, constraints, or semantic meaning are explained in the description text.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Returns') and resources ('real-time metrics'), listing exactly what metrics are included. It distinguishes from siblings like get-token-price (price only) and get-token-history (historical data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by stating 'does not use time windows and does not include trade statistics,' which helps differentiate from tools like get-token-history-range. However, it doesn't explicitly name alternatives or specify when-not-to-use scenarios beyond the exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-token-metadata (C)
Returns the metadata for a given token address stored in the database. Social fields are omitted.
| Name | Required | Description | Default |
|---|---|---|---|
| pathParams | Yes | — | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but only states it returns metadata from a database. It doesn't disclose behavioral traits such as whether it's read-only, performance characteristics, error handling, or what metadata fields are included beyond social fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and to the point with two sentences, front-loading the main purpose. However, the second sentence about social fields could be integrated more smoothly, and it lacks structural elements like examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and low schema coverage, the description is incomplete. It doesn't explain what metadata is returned, how to interpret results, or address potential complexities like missing tokens or database errors.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but adds no parameter semantics. It doesn't explain what 'token address' means beyond the schema's example, nor clarify the format or validation for the address parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Returns') and resource ('metadata for a given token address'), specifying it's stored in the database. It distinguishes from siblings by noting social fields are omitted, but doesn't explicitly differentiate from similar tools like get-token-audit or get-token-marketdata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like get-token-audit or get-token-marketdata is provided. The description mentions social fields are omitted, which hints at a limitation but doesn't give explicit usage context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-token-performance (B)
Returns percentage performance change for selected metrics over multiple time windows compared to the current value.
| Name | Required | Description | Default |
|---|---|---|---|
| body | Yes | — | — |
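A hedged sketch of the body and a guessed response slice, consistent with "percentage performance change for selected metrics over multiple time windows"; field names and window labels are assumptions:

```python
# Hypothetical get-token-performance body; field names are assumed.
payload = {
    "body": {
        "token": "So11111111111111111111111111111111111111112",
        "metrics": ["price", "marketCap"],
    }
}

# Plausible (unconfirmed) response slice: percentage change per metric
# per time window, e.g. {"price": {"1h": -2.1, "24h": 5.4}}.
assumed_slice = {"price": {"1h": -2.1, "24h": 5.4}}
```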
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns percentage changes but doesn't specify what 'multiple time windows' means (e.g., predefined intervals like 1h, 24h, 7d), whether it's a read-only operation (implied by 'get' but not explicit), or any rate limits or authentication needs. This leaves significant gaps for an agent to understand the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality. There's no wasted language, and it directly communicates what the tool does without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (performance calculations over time windows), lack of annotations, and no output schema, the description is incomplete. It doesn't explain the return format (e.g., structure of percentage changes), what 'multiple time windows' entails, or how metrics are calculated. For a tool with nested parameters and no structured guidance, this leaves too much ambiguity for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter (body) with nested properties, and schema description coverage is 0%, meaning parameters are undocumented in the schema. The description compensates by mentioning 'selected metrics' and 'multiple time windows,' which align with the schema's 'metrics' array and imply additional time-related parameters not explicitly in the schema. However, it doesn't fully detail all parameters (e.g., token address is mentioned only in schema).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns percentage performance change for selected metrics over multiple time windows compared to the current value.' It specifies the verb ('Returns'), resource ('percentage performance change'), and scope ('selected metrics', 'multiple time windows', 'compared to current value'). However, it doesn't explicitly differentiate from siblings like get-token-price or get-token-history, which might provide related but different data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like get-token-price (which might return price data without performance changes) or get-token-history (which might provide historical data without percentage calculations). There's no context about prerequisites, such as needing a valid token address or specific time windows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-token-price (C)
Returns the latest price, market cap, and liquidity for a specific token address.
| Name | Required | Description | Default |
|---|---|---|---|
| pathParams | Yes | — | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns data but doesn't specify if it's a read-only operation, any rate limits, authentication needs, error handling, or data freshness. For a tool fetching financial data, this lack of transparency is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality: 'Returns the latest price, market cap, and liquidity for a specific token address.' There is no wasted text, and it directly communicates the tool's purpose without unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of financial data retrieval, no annotations, no output schema, and low parameter schema coverage, the description is incomplete. It doesn't cover return values, error cases, data sources, or behavioral traits, leaving the agent with insufficient context for reliable use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, but the tool description adds no parameter semantics beyond implying a 'token address' is needed. It doesn't explain format (e.g., a base58 Solana mint address), validation, or examples. With low schema coverage, the description fails to compensate adequately, but the single parameter is straightforward.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns the latest price, market cap, and liquidity for a specific token address.' It specifies the verb ('returns') and the resources (price, market cap, liquidity) for a token. However, it doesn't explicitly differentiate from sibling tools like 'get-token-marketdata' or 'get-token-history', which may offer overlapping or related data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get-token-marketdata' or 'get-token-history', nor does it specify any prerequisites, exclusions, or contextual cues for usage. The agent must infer usage from the tool name and description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-tokens-by-deployer (C)
Returns a list of tokens where the specified wallet address is the developer (devWallet).
| Name | Required | Description | Default |
|---|---|---|---|
| pathParams | Yes | — | — |
| queryParams | Yes | — | — |
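A hedged call sketch: `walletAddress` is the parameter name mentioned in the evaluation below, while `limit` and its placement in `queryParams` are assumptions:

```python
# Hypothetical get-tokens-by-deployer arguments. "walletAddress" is the
# devWallet being queried; "limit" is an undocumented result cap.
args = {
    "pathParams": {"walletAddress": "<deployer wallet address>"},
    "queryParams": {"limit": 50},
}
```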
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states this is a read operation ('Returns a list'), which is helpful, but lacks critical details: no mention of pagination (only limit parameter), rate limits, authentication requirements, error conditions, or what format the returned tokens will have. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point with zero wasted words. It's appropriately sized for a simple lookup tool and front-loads the core functionality. Every word earns its place in communicating the essential purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 0% schema description coverage, the description is incomplete. It doesn't explain what 'tokens' means in this context (cryptocurrency tokens? authentication tokens?), what fields are returned, error handling, or important behavioral constraints. For a tool with 2 parameters and complex sibling relationships, this leaves too many unanswered questions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning the schema provides no parameter descriptions. The description mentions 'specified wallet address' which maps to 'walletAddress' parameter, but doesn't explain the 'limit' parameter at all. It adds minimal semantic value beyond what's obvious from parameter names, failing to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Returns a list of tokens') and the specific resource/condition ('where the specified wallet address is the developer (devWallet)'). It distinguishes itself from siblings like 'get-wallet-holdings' or 'get-token-metadata' by focusing on deployer relationship rather than holdings or metadata. However, it doesn't explicitly contrast with all siblings, keeping it at 4 instead of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when this tool is appropriate (e.g., finding tokens created by a specific developer) versus when to use siblings like 'search-tokens' or 'get-wallet-holdings'. There's no context about prerequisites, alternatives, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-trending-tokens (C)
Returns a list of trending tokens ordered by transaction count for a given interval.
| Name | Required | Description | Default |
|---|---|---|---|
| queryParams | Yes | — | — |
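The schema reportedly defines the interval as an enum with a default, but neither the values nor the default appear on this page, so the value below is a guess:

```python
# Hypothetical get-trending-tokens arguments; the "interval" enum values
# are not published here, so "24h" is an assumed example value.
args = {"queryParams": {"interval": "24h"}}
```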
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions ordering and interval but doesn't disclose behavioral aspects like pagination, rate limits, authentication requirements, data freshness, or what constitutes a 'trending' threshold. For a read operation with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It front-loads the core purpose and includes key details (ordering, interval) without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (1 parameter with nested object, no output schema, no annotations), the description is incomplete. It doesn't explain return format (e.g., token fields, transaction counts), error conditions, or how 'trending' is calculated beyond transaction count ordering. For a tool with no structured output documentation, this leaves the agent guessing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description adds meaning by explaining that results are 'ordered by transaction count' over 'a given interval'. However, it doesn't detail the parameter's enum values or default, leaving the schema to handle those specifics. This provides moderate compensation for the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Returns') and resource ('list of trending tokens'), and specifies ordering ('by transaction count') and scoping ('for a given interval'). However, it doesn't explicitly differentiate from sibling tools like 'get-token-history' or 'search-tokens' that might also return token lists.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With many sibling tools returning token-related data (e.g., 'get-token-history', 'search-tokens'), the description lacks context about when trending tokens are preferred over other token queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-wallet-holdings (B)
Retrieve the token holdings of a given wallet address, including balances and percentage ownership per token.
| Name | Required | Description | Default |
|---|---|---|---|
| pathParams | Yes | — | — |
| queryParams | Yes | — | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While it states what data is retrieved, it doesn't mention important behavioral aspects like rate limits, authentication requirements, error conditions, pagination behavior (implied by 'limit' parameter but not explained), or whether this is a real-time or cached query.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates the core functionality without any wasted words. It's appropriately sized for the tool's complexity and gets straight to the point.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 parameters, no annotations, and no output schema, the description is insufficient. It doesn't explain the return format, error handling, or important behavioral constraints. While concise, it leaves too many operational questions unanswered for proper agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions 'wallet address' and 'token holdings', which align with the two parameters, but with 0% schema description coverage neither the schema nor the description supplies the specifics: parameter formats, constraints, or the 'limit' parameter's purpose and default value. The description adds minimal value beyond restating the bare parameter structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('retrieve'), resource ('token holdings of a given wallet address'), and scope ('including balances and percentage ownership per token'). It distinguishes itself from sibling tools like 'get-wallet-token-pnl' by focusing on holdings rather than profit/loss calculations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get-token-holders' or 'get-wallet-token-pnl'. It doesn't mention prerequisites, limitations, or appropriate contexts for invocation beyond the basic purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-wallet-token-pnl (C)
Calculate realized and unrealized profit and loss (PnL) for a wallet on a specific token.
| Name | Required | Description | Default |
|---|---|---|---|
| pathParams | Yes | | |
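The description names realized and unrealized PnL but, as the review below notes, not the method behind them. A common convention is average-cost accounting: sells realize the difference between sale price and average cost, while the remaining position is marked to the current price. The sketch below illustrates that convention only — it is an assumption, not necessarily how this tool computes PnL:

```python
from dataclasses import dataclass

@dataclass
class Fill:
    side: str      # "buy" or "sell"
    amount: float  # token quantity
    price: float   # price per token at fill time

def wallet_token_pnl(fills, current_price):
    """Average-cost PnL sketch: realized on sells, unrealized on what remains."""
    held = 0.0      # tokens currently held
    cost = 0.0      # total cost basis of held tokens
    realized = 0.0
    for f in fills:
        if f.side == "buy":
            held += f.amount
            cost += f.amount * f.price
        else:
            # Sell releases cost basis proportionally at the average cost.
            avg = cost / held
            realized += f.amount * (f.price - avg)
            cost -= f.amount * avg
            held -= f.amount
    unrealized = held * current_price - cost
    return {"realized": realized, "unrealized": unrealized}
```

Buying 10 tokens at 1.0, selling 5 at 2.0, with a current price of 3.0 yields realized PnL of 5.0 and unrealized PnL of 10.0 under this convention; FIFO or per-lot accounting would give different splits.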
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'calculate' PnL but does not specify whether this is a read-only operation, requires authentication, involves rate limits, or details the computational process (e.g., based on historical data). This leaves significant gaps in understanding the tool's behavior and constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key action and target without unnecessary details. It is appropriately sized for the tool's complexity, with zero waste or redundancy, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of PnL calculation, lack of annotations, no output schema, and low schema description coverage, the description is incomplete. It does not address behavioral aspects like data sources, timeframes for calculations, error handling, or return format, leaving the agent with insufficient context to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description implies parameters for wallet and token but does not add meaning beyond what the input schema provides. Schema description coverage is 0%: the schema names 'walletAddress' and 'tokenAddress' but attaches no descriptions to them. The tool description compensates slightly by clarifying the purpose (PnL calculation), but it does not explain parameter formats, validation rules, or interactions, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('calculate') and the target ('realized and unrealized profit and loss for a wallet on a specific token'), which is specific and unambiguous. However, it does not explicitly differentiate from sibling tools like 'get-wallet-holdings' or 'get-token-performance', which might also relate to wallet or token metrics, though the focus on PnL is distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'get-wallet-holdings' for overall holdings or 'get-token-performance' for broader token metrics. It lacks context about prerequisites, exclusions, or specific scenarios where PnL calculation is appropriate, leaving the agent to infer usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search-tokens (B)
Performs a flexible search for tokens by name, symbol, or address, with optional filters, sorting, and symbol-only mode.
| Name | Required | Description | Default |
|---|---|---|---|
| queryParams | Yes | | |
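As the review below notes, the schema carries no parameter descriptions, so the exact shape of 'queryParams' is unknown. A hypothetical builder for it might look like this — 'searchQuery' is taken from the review text, while 'sortBy', 'symbolOnly', and 'limit' are assumed names, not the tool's documented fields:

```python
def build_search_query(search_query, sort_by=None, symbol_only=False, limit=20):
    """Assemble a hypothetical queryParams object for search-tokens.

    All field names besides 'searchQuery' are assumptions; verify against
    the tool's actual input schema before use.
    """
    params = {"searchQuery": search_query, "limit": limit}
    if sort_by:
        params["sortBy"] = sort_by          # e.g. "volume" (assumed values)
    if symbol_only:
        params["symbolOnly"] = True          # restrict matching to symbols
    return params
```

For example, `build_search_query("BONK", sort_by="volume", symbol_only=True)` would produce a query matching only on symbol, sorted by volume, capped at 20 results — if the server accepts these field names.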
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'flexible search' and 'optional filters, sorting, and symbol-only mode,' which gives some behavioral context. However, it lacks critical details: whether this is a read-only operation, if it requires authentication, rate limits, pagination behavior (beyond the 'limit' parameter), or what happens with empty results. For a search tool with no annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's core functionality and key features. It front-loads the main purpose and lists additional capabilities without unnecessary words. Every part earns its place by adding useful information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a search tool with nested parameters), no annotations, and no output schema, the description is incomplete. It lacks behavioral transparency details (e.g., safety, auth), doesn't fully explain parameters (e.g., 'symbol-only mode'), and provides no information on return values or error handling. For a tool with these gaps, it should do more to guide the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description adds value by explaining that search is 'by name, symbol, or address' and mentions 'optional filters, sorting, and symbol-only mode,' which helps interpret 'searchQuery' and the other fields. However, it doesn't detail the specific filters or what 'symbol-only mode' entails, leaving gaps for the single parameter (a nested object with 4 sub-parameters).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Performs a flexible search for tokens by name, symbol, or address' with specific search fields. It distinguishes itself from siblings by focusing on flexible search capabilities, unlike tools like 'get-token-metadata' or 'get-token-price' which retrieve specific data. However, it doesn't explicitly contrast with 'get-trending-tokens' or 'get-tokens-by-deployer' which also retrieve token lists.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through 'flexible search' with 'optional filters, sorting, and symbol-only mode,' suggesting it's for general token discovery. However, it provides no explicit guidance on when to use this versus alternatives like 'get-trending-tokens' for popular tokens or 'get-tokens-by-deployer' for deployer-specific tokens. The context is clear but lacks comparative exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.