get_nft_listings

Read-only · Idempotent

Retrieve active NFT listings for an EVM collection, sorted floor-ascending, with tokenId, price, marketplace, and seller details. Use for research and candidate selection.

Instructions

Issue #569. Ranked individual listings (currently active asks) for a single EVM NFT collection on a single chain, sorted floor-ascending. Distinct from get_nft_collection, which exposes only collection-level metadata (floor / volume / holders) and so cannot ground a "show me the N cheapest" question.

Source: Reservoir /orders/asks/v5?status=active&sortBy=price&sortDirection=asc. Returns rows with tokenId, priceEth / priceUsd, priceCurrency, listingSource (marketplace domain — opensea.io / blur.io / x2y2.io / etc.), makerAddress (seller), validUntil (expiry), and orderKind (seaport-v1.6 / blur / etc.). Page size is schema-capped at 10 (default 5), small enough that the agent can validate every referenced row exists in the response. Single-token criteria only; collection-bid criteria orders are filtered out, so every row names a concrete tokenId.

SCOPE: read-only display tool. VaultPilot does NOT yet expose an NFT-buy preparation flow — Seaport / blur / x2y2 marketplace fills require EIP-712 typed-data signing, gated on the typed-data clear-sign defenses tracked at #453. Use these rows for research / candidate selection; execute any actual buy via the listing's marketplace UI (listingSource field) until the prepare flow lands.

AGENT BEHAVIOR: do NOT extrapolate beyond rows.length. Validate that any rows[i] referenced in the answer actually exists in this response. The small page cap is the fabrication-resistance guard called out in #569. EVM-only in v1; Solana NFT marketplaces (Magic Eden / Tensor) deferred. Read-only.
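The upstream query named above can be sketched as a URL builder. The endpoint path and query parameters (status, sortBy, sortDirection) come from the description; the API host and the contracts parameter name are assumptions for illustration, not confirmed details of the Reservoir API.

```python
# Sketch of the upstream Reservoir asks query described above.
# Assumptions are marked inline; only the path and the three sort/status
# params are taken from the tool description.
import urllib.parse

BASE = "https://api.reservoir.tools"  # assumed host

def build_asks_url(contract: str, limit: int = 5) -> str:
    """Build the floor-ascending active-asks query for one collection."""
    params = {
        "contracts": contract,    # assumed param name for the collection
        "status": "active",       # only live asks (from the description)
        "sortBy": "price",
        "sortDirection": "asc",   # floor-ascending
        "limit": min(limit, 10),  # schema-capped at 10
    }
    return f"{BASE}/orders/asks/v5?{urllib.parse.urlencode(params)}"

# Example: a requested limit above the cap is clamped to 10.
url = build_asks_url("0xBC4CA0EdA7647A8aB7C2061c2E118A18a936f13D", limit=12)
```

The clamp mirrors the schema cap: even if a caller asks for more, the page never exceeds 10 rows.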

Input Schema

- contractAddress (required): EVM contract address of the NFT collection.
- chain (optional, default: ethereum): EVM chain the collection is deployed on.
- limit (optional, default: 5): Max ranked listings to return (cheapest-first). Capped at 10 — small enough that the agent can validate every row index against the response before referencing it. The issue (#569) explicitly calls out the small cap as part of the fabrication-resistance defense.
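The validation the limit cap is designed to enable can be sketched as a small agent-side guard: check every row index an answer cites against rows.length before using it. The response shape (a rows array with tokenId, priceEth, listingSource) follows the description; the guard function itself is a hypothetical illustration, not part of the tool.

```python
# Hypothetical agent-side guard: refuse to cite rows[i] unless it exists
# in the actual tool response. Field names follow the tool description.

def validate_row_refs(response: dict, referenced_indices: list[int]) -> list[dict]:
    """Return the referenced rows, raising if any index is out of range."""
    rows = response.get("rows", [])
    out = []
    for i in referenced_indices:
        if not 0 <= i < len(rows):
            raise IndexError(f"rows[{i}] does not exist; response has {len(rows)} rows")
        out.append(rows[i])
    return out

# Example: a 3-row response can ground claims about rows 0-2 only.
resp = {"rows": [
    {"tokenId": "1", "priceEth": 0.5, "listingSource": "opensea.io"},
    {"tokenId": "7", "priceEth": 0.6, "listingSource": "blur.io"},
    {"tokenId": "9", "priceEth": 0.7, "listingSource": "x2y2.io"},
]}
cheapest_two = validate_row_refs(resp, [0, 1])
```

With at most 10 rows per page, this check stays cheap and makes "the 12th cheapest listing" style extrapolation fail loudly instead of silently.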
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, but the description adds substantial behavioral context: data source (Reservoir API), page size cap (10, default 5) with rationale for fabrication resistance, single-token criteria (no collection bids), and the limitation that buying is not yet supported. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is long but well-structured: purpose, distinction from the sibling tool, data source, return fields, scope, and agent behavior. Every sentence adds useful information. It could be slightly more concise, but the structure compensates.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description provides detailed return fields (tokenId, priceEth, etc.) and constraints (EVM-only, single-collection, single-chain, no buys). It adequately covers what the agent needs to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, so the baseline is 3. The description adds value by explaining the limit parameter's purpose ("small enough that the agent can validate every row index against the response") and by referencing the issue for fabrication resistance. For contractAddress and chain, the schema already provides clear patterns and enums.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns ranked active listings for an EVM NFT collection, sorted by price ascending. It distinguishes from sibling get_nft_collection by specifying that tool only gives collection-level metadata, while this one provides individual token-level listings. The verb 'get' and resource 'nft_listings' are precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says when to use this tool vs get_nft_collection: "Distinct from ... which exposes only collection-level metadata ... so cannot ground a 'show me the N cheapest' question." It also advises using the data for research and executing buys via the marketplace UI until the prepare flow lands, and it notes EVM-only scope with Solana deferred, setting clear boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
