Lightyear CryptoPunks
Server Details
Browse traits, filter 10K punks, listings, bids, Merkle roots, and bid pricing for CryptoPunks.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4/5 across all 11 tools.
Most tools have distinct purposes, but some overlap exists: 'browse_traits' and 'browse_types' both list metadata, which could cause confusion if an agent needs general browsing. However, descriptions clarify their specific focuses (traits vs. types), and other tools like 'filter_punks' and 'get_punk_details' serve clearly different functions.
All tool names follow a consistent verb_noun pattern with snake_case, such as 'browse_traits', 'filter_punks', and 'get_floor_price'. This predictability makes it easy for agents to understand and navigate the toolset without confusion from mixed naming conventions.
With 11 tools, the server is well-scoped for its CryptoPunks domain, covering browsing, filtering, market analysis, and bid management. Each tool appears to earn its place by addressing specific aspects of the ecosystem, avoiding both excessive bloat and insufficient coverage.
The toolset provides comprehensive coverage for browsing, filtering, market data, and bid operations in the CryptoPunks domain. Minor gaps exist, such as no direct tools for placing bids or managing listings, but agents can work around this using the provided analysis and retrieval tools for actionable insights.
Available Tools
11 tools

browse_traits (A, Read-only)
List all CryptoPunk traits with their counts. Optionally filter by type to see how many punks of that type have each trait. Rate limit: 10 per 10 min (read bucket — shared with browse_types, get_punk_details, get_listings, get_floor_price, get_bids_for_punk, get_bids_for_merkle_root).
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Filter counts to a specific type (e.g. 'Male', 'Zombie') | |
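The filtering behavior the description promises ("see how many punks of that type have each trait") can be sketched as a trait counter over punk records. The records and field names below are hypothetical examples, not real CryptoPunks metadata; the server parameter is named `type`.

```python
from collections import Counter

# Hypothetical punk records for illustration only.
PUNKS = [
    {"id": 0, "type": "Male", "traits": ["Earring", "Smile"]},
    {"id": 1, "type": "Female", "traits": ["Earring"]},
    {"id": 2, "type": "Zombie", "traits": ["Smile"]},
]

def browse_traits(punks, punk_type=None):
    """Count each trait, optionally restricted to one punk type."""
    counts = Counter()
    for punk in punks:
        if punk_type is None or punk["type"] == punk_type:
            counts.update(punk["traits"])
    return dict(counts)

print(browse_traits(PUNKS))                    # counts across all punks
print(browse_traits(PUNKS, punk_type="Male"))  # counts restricted to Male
```

With the sample data above, the unfiltered call counts each trait across all three punks, while the `Male` filter counts only punk 0's traits.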
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With readOnlyHint=true in annotations, the safety profile is established. The description adds valuable behavioral context by explaining that filtering affects the counts ('see how many punks of that type have each trait'). However, it omits details about return format, pagination, or rate limiting that would be helpful given the lack of output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient, front-loaded sentences with zero waste. The first sentence establishes the core operation, and the second immediately addresses the optional parameter usage, providing maximum information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single optional parameter, no nested objects) and 100% schema coverage, the description is appropriately complete. It conceptually describes the return value ('traits with their counts') compensating for the missing output schema, though explicit return structure details would improve it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds contextual meaning by explaining the purpose of filtering ('to see how many punks of that type have each trait'), reinforcing the schema description without adding redundant technical syntax.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'List[s] all CryptoPunk traits with their counts,' specifying the verb (List), resource (CryptoPunk traits), and returned data (counts). It implicitly distinguishes from sibling 'browse_types' by focusing on traits rather than types, though it does not explicitly contrast the two tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance by explaining the optional filter parameter ('Optionally filter by type...'), indicating when to use the parameter. However, it lacks explicit guidance on when to use this tool versus alternatives like 'filter_punks' or 'get_punk_details' for different data needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
browse_types (A, Read-only)
List all CryptoPunk type categories with their counts. Rate limit: 10 per 10 min (read bucket — shared with browse_traits, get_punk_details, get_listings, get_floor_price, get_bids_for_punk, get_bids_for_merkle_root).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, and the description confirms this with 'List'. It adds valuable behavioral context that the operation returns 'counts' alongside categories, which is not evident from the annotation alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence delivers all necessary information without redundancy. Front-loaded with action verb and fully qualified object. Zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, read-only annotation, and low complexity, the description adequately covers the tool's function. While no output schema exists, the return value (categories with counts) is clearly implied by the description for this simple listing operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, which per guidelines establishes a baseline of 4. No parameter documentation is required or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'List' with clear resource 'CryptoPunk type categories' and scope 'with their counts'. It effectively distinguishes from sibling 'browse_traits' by specifying 'type categories' rather than traits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through specificity (getting type counts), but lacks explicit 'when to use' guidance or contrast with siblings like 'browse_traits'. No prerequisites or exclusions are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compute_merkle_root (A, Read-only)
Compute the Merkle root for a set of CryptoPunk IDs. The root matches the on-chain Solady MerkleProofLib encoding used by the Stash contract. Rate limit: 5 per 10 min (compute bucket — shared with filter_punks, resolve_merkle_root).
| Name | Required | Description | Default |
|---|---|---|---|
| punkIds | Yes | Punk IDs to include in the Merkle tree (1-10000) | |
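Solady's MerkleProofLib hashes each sibling pair in sorted order, which makes proofs independent of left/right position. The sketch below shows that structure only: it uses `hashlib.sha256` as a stdlib stand-in for keccak256, and the leaf encoding (sorted 32-byte big-endian IDs, duplicating the last node at odd levels) is an assumption, so its roots will not match what `compute_merkle_root` returns.

```python
import hashlib

def _hash_pair(a: bytes, b: bytes) -> bytes:
    # Commutative pair hashing: sort siblings before hashing, as Solady's
    # MerkleProofLib does. Real contracts use keccak256; sha256 is a stand-in.
    lo, hi = sorted((a, b))
    return hashlib.sha256(lo + hi).digest()

def merkle_root(punk_ids: list[int]) -> str:
    # Assumed leaf encoding: 32-byte big-endian punk IDs, sorted before hashing.
    leaves = sorted(pid.to_bytes(32, "big") for pid in punk_ids)
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # odd count: duplicate the last node
            level.append(level[-1]) # (a common convention; padding rule assumed)
        level = [_hash_pair(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return "0x" + level[0].hex()

# Because leaves are sorted first, input order does not affect the root:
print(merkle_root([42, 7, 1000]) == merkle_root([1000, 42, 7]))  # True
```

Sorting the leaves is what makes the root a set commitment over punk IDs rather than a sequence commitment, which matches the tool's framing of "a set of CryptoPunk IDs".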
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While readOnlyHint: true indicates a safe read operation, the description adds critical behavioral context by specifying the exact on-chain library (Solady MerkleProofLib) and target contract (Stash) used for encoding. This ensures agents understand the deterministic output format matches blockchain expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first defines the core operation, and the second specifies the critical encoding standard. Every word earns its place, and the technical details are front-loaded for immediate utility.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single well-documented parameter and read-only nature, the description is nearly complete. The Solady/Stash reference provides essential domain context. Minor gap: without an output schema, explicit mention of the return format (e.g., hex string) would strengthen completeness for a cryptographic computation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('Punk IDs to include in the Merkle tree'), the schema fully documents the single punkIds parameter. The description references 'set of CryptoPunk IDs' but does not add semantic details beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Compute' + resource 'Merkle root' + scope 'CryptoPunk IDs' clearly defines the operation. The additional detail about 'Solady MerkleProofLib encoding used by the Stash contract' distinguishes this from generic Merkle tools and siblings like resolve_merkle_root.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The reference to 'Stash contract' and 'Solady MerkleProofLib encoding' provides clear contextual guidance for when to use this tool (generating roots compatible with that specific contract). However, it lacks explicit contrast with siblings like resolve_merkle_root or get_bids_for_merkle_root to clarify when to compute vs. query.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
filter_punks (A, Read-only)
Filter CryptoPunks by type and traits. Returns matching punk IDs and their Merkle root. Rate limit: 5 per 10 min (compute bucket — shared with compute_merkle_root, resolve_merkle_root).
| Name | Required | Description | Default |
|---|---|---|---|
| types | No | Type names to include (e.g. ['Male', 'Zombie']). Empty = all types. | |
| matchMode | No | 'all' = punk must have every included trait, 'any' = at least one | all |
| excludedTraits | No | Traits to exclude | |
| includedTraits | No | Traits to include | |
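The matchMode semantics in the table ('all' = every included trait, 'any' = at least one) can be sketched as set operations per punk. This is an illustrative reading of the documented behavior; in particular, exclusion taking priority over inclusion is an assumption.

```python
def matches(punk_traits, included, excluded, match_mode="all"):
    """Does one punk pass the trait filters? (Assumes exclusion wins.)"""
    traits = set(punk_traits)
    if traits & set(excluded):
        return False                      # any excluded trait disqualifies
    if not included:
        return True                       # no inclusion filter: pass
    hits = traits & set(included)
    if match_mode == "all":
        return hits == set(included)      # must carry every included trait
    return bool(hits)                     # 'any': at least one is enough

punk = ["Earring", "Smile", "Cap"]
print(matches(punk, ["Earring", "Cap"], [], "all"))     # True: has both
print(matches(punk, ["Earring", "Hoodie"], [], "all"))  # False: no Hoodie
print(matches(punk, ["Earring", "Hoodie"], [], "any"))  # True: has Earring
print(matches(punk, [], ["Cap"]))                       # False: excluded
```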
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true, the description adds valuable behavioral context about the return format (punk IDs + Merkle root) which compensates for the missing output schema. It does not mention result set limits, pagination, or authentication requirements, but the safety profile is covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states the filtering action, second states the return value. Every word earns its place; no redundancy with schema or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description appropriately discloses the return structure (IDs and Merkle root). All parameters are well-documented in the schema. Minor gap: could note that all parameters are optional (0 required) indicating flexible filtering, or mention approximate result set sizes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all four parameters (types, matchMode, excludedTraits, includedTraits) including enum values and examples. The description mentions filtering 'by type and traits' but adds no semantic details beyond the comprehensive schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Filter') and resources ('CryptoPunks') and distinguishes from siblings by specifying it returns 'matching punk IDs and their Merkle root'—unlike browse_traits or browse_types which return metadata categories, or get_punk_details which returns individual punk data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context by mentioning the return values (IDs + Merkle root), suggesting when to use it versus browsing tools. However, lacks explicit guidance on when to prefer this over browse_traits/browse_types for discovery, or when to use compute_merkle_root instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_bid_recommendation (A, Read-only)
Analyze market data and recommend a bid price range for a CryptoPunk trait selection. Combines floor price, competing bids, and set composition into actionable guidance. composition lists only the traits that define the selection; use absentTraits for a full exclusion list when needed. Rate limit: 1 per 1 min (recommend bucket).
| Name | Required | Description | Default |
|---|---|---|---|
| types | No | Type names to filter (e.g. ['Male', 'Zombie']) | |
| punkIds | No | Explicit punk IDs (overrides trait filters) | |
| matchMode | No | | all |
| excludedTraits | No | Traits to exclude | |
| includedTraits | No | Traits to include | |
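The description says the tool combines floor price, competing bids, and set composition into a recommended range. The heuristic below is purely illustrative of how those three inputs might interact, and is not the server's actual model: it bids just above the top competing bid, caps below the floor, and discounts wider (less selective) sets.

```python
def recommend_bid_range(floor_eth, competing_bids_eth, set_size, total=10_000):
    """Illustrative heuristic only; NOT the server's pricing model."""
    top_bid = max(competing_bids_eth, default=0.0)
    selectivity = 1.0 - set_size / total        # smaller sets = more selective
    low = max(top_bid * 1.05,                   # outbid current competition
              floor_eth * 0.5 * selectivity)    # floor-anchored lower bound
    high = min(floor_eth * 0.95,                # stay below the floor ask
               low * 1.25)                      # keep the range tight
    return round(low, 3), round(high, 3)

# A 100-punk trait set with a 50 ETH floor and competing bids at 20 and 25 ETH:
lo, hi = recommend_bid_range(50.0, [20.0, 25.0], 100)
print(lo, hi)
```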
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable methodological context beyond the readOnlyHint annotation, explaining that it combines 'floor price, competing bids, and set composition' into guidance. This transparency about data sources is helpful. However, with no output schema provided, it omits what the recommendation format looks like (e.g., price ranges, confidence scores).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure is perfectly efficient: the first states purpose and scope, the second explains methodology. There is no redundant text or generic filler, and the most critical information (recommendation purpose) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the moderate complexity (5 optional parameters) and the absence of an output schema, the description adequately covers the analytical approach but should ideally describe the return structure (e.g., min/max bid recommendations) to be fully complete. The readOnlyHint annotation covers the safety profile.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 80% schema description coverage, the baseline is appropriately met. The description mentions 'trait selection' which contextualizes the includedTraits/excludedTraits parameters, but does not add specific semantics about the matchMode logic or the punkIds override behavior beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Analyze', 'recommend') and identifies the resource ('CryptoPunk trait selection') and output ('bid price range'). It clearly distinguishes from siblings like get_bids_for_punk (specific IDs) and get_floor_price (general floor) by focusing on trait-based recommendations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through 'trait selection,' suggesting this tool is for trait-based analysis rather than specific punk queries. However, it lacks explicit guidance on when to use this versus get_bids_for_punk or filter_punks, and does not mention the punkIds override capability mentioned in the schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_bids_for_merkle_root (A, Read-only)
Get pending EIP-712 bids matching a specific Merkle root. Useful for seeing competition on a trait-based bid set. Rate limit: 10 per 10 min (read bucket — shared with browse_types, browse_traits, get_punk_details, get_listings, get_floor_price, get_bids_for_punk).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max bids to return (default 5) | |
| merkleRoot | Yes | Merkle root (0x-prefixed, 32 bytes hex) | |
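The merkleRoot constraint (0x-prefixed, 32 bytes of hex) translates to exactly 64 hex characters after the prefix. A quick client-side check like the one below can catch malformed roots before spending a call from the shared read bucket; the server presumably performs its own validation as well.

```python
import re

# 0x prefix followed by exactly 64 hex characters (32 bytes).
MERKLE_ROOT_RE = re.compile(r"0x[0-9a-fA-F]{64}")

def is_valid_merkle_root(value: str) -> bool:
    """Shape check only; does not verify the root against any punk set."""
    return MERKLE_ROOT_RE.fullmatch(value) is not None

print(is_valid_merkle_root("0x" + "ab" * 32))  # True: 64 hex chars
print(is_valid_merkle_root("0x1234"))          # False: too short
```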
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond the readOnlyHint annotation by specifying 'pending' (indicating active bids only) and 'EIP-712' (the signature standard). However, it omits behavioral details like return format, sorting order, or what happens when no bids match, which is significant given the lack of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. The first sentence front-loads the core function; the second sentence immediately provides usage context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a read-only tool with complete schema coverage, but gaps remain. Given the complexity of Merkle roots and the absence of an output schema, the description should ideally describe the return structure (e.g., 'returns array of bid objects') or explain the relationship to sibling tools like compute_merkle_root.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both merkleRoot (hex pattern) and limit (default 5). The description mentions 'matching a specific Merkle root' which aligns with the required parameter, but adds no additional semantic details beyond the schema's excellent documentation, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get pending EIP-712 bids'), the resource (bids), and the filter mechanism ('matching a specific Merkle root'). It effectively distinguishes from sibling get_bids_for_punk by specifying 'trait-based bid set', indicating this operates on collections/traits rather than individual assets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence provides implied usage context ('Useful for seeing competition on a trait-based bid set'), suggesting when to use it. However, it lacks explicit guidance on when NOT to use it, and fails to mention related workflow siblings like compute_merkle_root or resolve_merkle_root that may be prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_bids_for_punk (A, Read-only)
Get pending EIP-712 bids that include a specific CryptoPunk. Returns bids from the CryptoPunks Bids API. Rate limit: 10 per 10 min (read bucket — shared with browse_types, browse_traits, get_punk_details, get_listings, get_floor_price, get_bids_for_merkle_root).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max bids to return (default 5) | |
| punkId | Yes | Punk ID to look up bids for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With readOnlyHint=true in annotations, the description adds valuable context by specifying the bid state ('pending') and technical standard ('EIP-712'), and discloses the data source ('CryptoPunks Bids API'). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. The first sentence front-loads the action and scope; the second clarifies the source. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with 100% schema coverage and simple parameters, the description is complete. It appropriately mentions the API source. It lacks elaboration on the bid return structure, though this is partially mitigated by the descriptive name and schema completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both punkId and limit parameters. The description mentions 'specific CryptoPunk' which aligns with punkId, but adds no semantic detail beyond what the schema already provides, meriting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') with a specific resource ('pending EIP-712 bids') and clearly scopes the tool to a 'specific CryptoPunk', distinguishing it from sibling tool get_bids_for_merkle_root. It precisely defines what the tool retrieves.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'that include a specific CryptoPunk' provides clear context for when to use this tool (when targeting a particular punk by ID). However, it does not explicitly contrast with get_bids_for_merkle_root or state when to use that alternative instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_floor_price (A, Read-only)
Get the current floor ask price from the native CryptoPunks marketplace. Excludes restricted (onlySellTo) and zero-value listings. Returns totalActive (same denominator as get_listings). Rate limit: 10 per 10 min (read bucket — shared with browse_types, browse_traits, get_punk_details, get_listings, get_bids_for_punk, get_bids_for_merkle_root).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
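The documented semantics (minimum ask after excluding restricted onlySellTo and zero-value listings, plus a totalActive count shared with get_listings) can be sketched as a simple filter-and-min. Listing field names here are illustrative, not the server's actual response shape.

```python
def floor_price(listings):
    """Minimum public ask, per the exclusions the description states."""
    active = [l for l in listings
              if l["price_eth"] > 0              # drop zero-value listings
              and l.get("only_sell_to") is None] # drop restricted listings
    return {
        "floorEth": min((l["price_eth"] for l in active), default=None),
        "totalActive": len(active),  # same denominator get_listings reports
    }

LISTINGS = [
    {"punk_id": 1, "price_eth": 60.0},
    {"punk_id": 2, "price_eth": 0.0},                           # zero-value
    {"punk_id": 3, "price_eth": 45.0, "only_sell_to": "0xabc"}, # restricted
    {"punk_id": 4, "price_eth": 52.5},
]
print(floor_price(LISTINGS))  # floor is 52.5, not the restricted 45.0
```

Note how the restricted 45.0 ETH listing does not set the floor even though it is the cheapest entry, which is exactly the caveat the description front-loads.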
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true. The description adds crucial behavioral context not in annotations: the filtering logic excluding restricted (onlySellTo) and zero-value listings, and specifies the data source ('native CryptoPunks marketplace').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence establishes core purpose; second sentence provides critical usage constraints. Perfectly front-loaded with no redundant fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter read operation with good annotations, the description is complete. It explains what value is returned and the critical caveats affecting that value (exclusions), which is sufficient without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, establishing baseline of 4. Description appropriately focuses on behavioral semantics (exclusion filters) rather than non-existent parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specifies exact action ('Get'), resource ('current floor ask price'), and scope ('native CryptoPunks marketplace'). The exclusion criteria clearly distinguish it from sibling 'get_listings' which would return all listings including restricted ones.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context through exclusion criteria ('Excludes restricted... and zero-value listings'), implicitly guiding when to use alternatives. However, it does not explicitly name the sibling tool (e.g., 'use get_listings for restricted items') that should be used instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_listings (A, Read-only)
Get currently listed CryptoPunks from the native marketplace. By default returns only publicly buyable listings (excludes restricted onlySellTo and zero-value entries); pass includeRestricted: true to include private bundle sales. Optionally filter by price range. Returns totalActive (publicly buyable listings, same denominator as get_floor_price) and matchedCount (after your filters). Rate limit: 10 per 10 min (read bucket — shared with browse_types, browse_traits, get_punk_details, get_floor_price, get_bids_for_punk, get_bids_for_merkle_root).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 50) | |
| maxPriceEth | No | Maximum price in ETH | |
| minPriceEth | No | Minimum price in ETH | |
| includeRestricted | No | Include restricted (onlySellTo) and zero-value listings (matches get_floor_price when false) | false |
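The totalActive/matchedCount distinction the description draws can be sketched as two separate counts: totalActive over publicly buyable listings (the get_floor_price denominator), matchedCount after the caller's price filters. Field names and response shape below are assumptions for illustration.

```python
def get_listings(listings, min_eth=None, max_eth=None,
                 include_restricted=False, limit=50):
    """Sketch of the documented get_listings counting behavior."""
    public = [l for l in listings
              if l["price_eth"] > 0 and l.get("only_sell_to") is None]
    pool = listings if include_restricted else public
    matched = [l for l in pool
               if (min_eth is None or l["price_eth"] >= min_eth)
               and (max_eth is None or l["price_eth"] <= max_eth)]
    return {
        "totalActive": len(public),    # always the public denominator
        "matchedCount": len(matched),  # after includeRestricted + price filters
        "listings": matched[:limit],
    }
```

For example, with two public listings at 60.0 and 52.5 ETH plus one restricted and one zero-value entry, `min_eth=55.0` yields totalActive of 2 but matchedCount of 1.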
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations provide readOnlyHint=true, covering the safety profile. The description adds valuable scope context beyond the annotations: 'native marketplace' (data source limitation) and 'currently listed' (active listings only, excluding unlisted or bid-only items). However, it omits behavioral details like pagination behavior, rate limits, or response format since no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes core functionality and data source, second addresses optional capabilities. Every word earns its place. The structure is appropriately front-loaded with the essential action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (four simple scalar parameters, 100% schema coverage) and clear read-only annotation, the description is sufficiently complete for tool selection. However, without an output schema, the description could have briefly characterized the return structure (e.g., 'returns listing data including price and punk ID') to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (limit, maxPriceEth, minPriceEth, and includeRestricted all well-documented), the baseline score is 3. The description acknowledges the optional price filtering capability ('Optionally filter by price range') but does not add semantic depth, syntax examples, or cross-parameter relationships beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('currently listed CryptoPunks from the native marketplace') that clearly distinguishes this from siblings like get_bids_for_punk (bids vs listings) and filter_punks (trait filtering vs marketplace listings). The scope limitation to 'native marketplace' is particularly helpful for distinguishing from aggregated listings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Optionally filter by price range' which implies when to use the optional parameters, but lacks explicit guidance on when to use this versus siblings like filter_punks (traits) or get_bids_for_punk (bids). No explicit when-not-to-use or alternatives are provided, though the use case is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_punk_details (A, Read-only)
Get metadata for specific CryptoPunks: type, traits with rarity percentages, and permalink. Rate limit: 10 per 10 min (read bucket — shared with browse_types, browse_traits, get_listings, get_floor_price, get_bids_for_punk, get_bids_for_merkle_root).
| Name | Required | Description | Default |
|---|---|---|---|
| punkIds | Yes | Punk IDs to look up (1-100 per call) | |
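Since punkIds accepts at most 100 IDs per call, an agent working with a larger set has to split it into batches. The helper below is a hypothetical sketch (the name `chunk_punk_ids` is not part of the server's API) showing one way to do that while respecting the limit.

```python
def chunk_punk_ids(punk_ids, batch_size=100):
    """Split punk IDs into batches that respect the 1-100 per-call limit."""
    if not 1 <= batch_size <= 100:
        raise ValueError("batch_size must be within the tool's 1-100 limit")
    # Slice the list into consecutive windows of at most batch_size IDs.
    return [punk_ids[i:i + batch_size] for i in range(0, len(punk_ids), batch_size)]

# 250 IDs become three calls: 100 + 100 + 50
batches = chunk_punk_ids(list(range(250)))
print([len(b) for b in batches])  # [100, 100, 50]
```

Keep the shared read bucket in mind when batching: 250 IDs cost three of the 10 calls allowed per 10 minutes across all read tools.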
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true; description adds valuable return payload context (type, traits with rarity percentages, permalink) not captured in input schema or annotations. Does not mention auth, but states the rate limit and its shared bucket and adequately describes what the tool returns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with zero waste. Every clause adds value: action (Get), target (metadata), and specific return fields (type, traits, rarity, permalink).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only lookup with 1 parameter and no output schema, the description adequately covers the tool's purpose and return structure. Could mention the batch lookup capability (1-100 IDs), but sufficient for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with punkIds fully documented. Description mentions 'specific CryptoPunks' which aligns with the parameter but does not add syntax details or validation constraints (e.g., 1-100 range) beyond what the schema already provides. Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' + resource 'metadata for specific CryptoPunks' with detailed output specification (type, traits, rarity, permalink). Clearly distinguishes from sibling tools like filter_punks (filtering) and browse_traits (browsing) by emphasizing retrieval of specific punk details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context through 'specific CryptoPunks,' suggesting targeted ID-based lookup, but lacks explicit when-to-use guidance or named alternatives (e.g., does not mention when to use filter_punks vs this tool).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_merkle_root (A, Read-only)
Reverse-engineer a Merkle root back to its punk IDs and inferred trait selection. ONLY works for roots that already have at least one bid in the CryptoPunks Bids API — this tool looks bids up by root, then derives trait config from the resulting punk set. Returns resolved: false for unknown roots; constructing a root locally and passing it here will not work. Rate limit: 5 per 10 min (compute bucket — shared with filter_punks, compute_merkle_root).
| Name | Required | Description | Default |
|---|---|---|---|
| merkleRoot | Yes | Merkle root to resolve (0x-prefixed, 32 bytes hex) | |
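Because the compute bucket allows only 5 calls per 10 minutes, it is worth validating the merkleRoot format locally before spending a call. The sketch below checks the documented shape (0x prefix followed by 32 bytes, i.e. 64 hex characters); the helper name `is_valid_merkle_root` is illustrative, not part of the server.

```python
import re

# 0x prefix plus exactly 64 hex characters (32 bytes).
MERKLE_ROOT_RE = re.compile(r"^0x[0-9a-fA-F]{64}$")

def is_valid_merkle_root(value: str) -> bool:
    """Check the 0x-prefixed 32-byte hex shape the merkleRoot parameter expects."""
    return bool(MERKLE_ROOT_RE.fullmatch(value))

print(is_valid_merkle_root("0x" + "ab" * 32))  # True: well-formed root
print(is_valid_merkle_root("0x1234"))          # False: too short
```

Note that a well-formed root can still come back as resolved: false; the check only catches malformed input, not roots the Bids API has never seen.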
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While the annotation declares readOnlyHint=true, the description adds valuable behavioral context: it reveals the implementation mechanism (looks up bids matching the root) and discloses the inferred nature of the trait selection ('inferred trait selection'). It also compensates for the missing output schema by indicating the return data includes punk IDs and trait types.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly efficient with zero waste. It front-loads the core purpose (reverse-engineering to punk IDs and traits), then provides essential implementation details (bid lookup, derivation logic) and a clear failure mode for unknown roots.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description effectively compensates by specifying what the tool returns (punk IDs, inferred types and traits) and the failure mode for unknown roots (resolved: false). It explains the lookup mechanism (via bids), which is crucial for understanding the tool's data source. A score of 5 would require more detail on return structure specifics, but the coverage is strong for a single-parameter lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'merkleRoot' parameter, the schema fully documents the input requirements (0x-prefixed, 32 bytes hex). The description references the Merkle root but does not add additional semantic context, examples, or format guidance beyond what the schema already provides, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Reverse-engineer', 'Looks up', 'derives') and clearly identifies the resource (Merkle root, punk IDs, trait selection). It distinctly positions the tool as the inverse of sibling 'compute_merkle_root' through the 'reverse-engineer' framing, and differentiates from 'get_bids_for_merkle_root' by specifying it derives the original punk set rather than just returning bids.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool (when you have a Merkle root and need to recover the underlying punk IDs and traits). However, it does not explicitly state when NOT to use it or name specific alternatives like 'compute_merkle_root' for the forward operation, though the inverse relationship is implied by 'reverse-engineer'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.