OpenSea
Server Details
OpenSea MCP — NFT marketplace data (v2 API)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-opensea
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 16 of 16 tools scored. Lowest: 2.9/5.
The server mixes OpenSea-specific tools (5) with a large set of generic Pipeworx tools (11) that overlap heavily in purpose. Tools like ask_pipeworx, entity_profile, compare_entities, and validate_claim all handle general data queries, making it hard for an agent to choose the correct one. The OpenSea tools are distinct but buried among ambiguously described utility functions.
Naming patterns are inconsistent. OpenSea tools follow a 'verb_noun' pattern (get_collection, list_owned_nfts), but Pipeworx tools mix verb-first names (ask_pipeworx), bare noun phrases (entity_profile, recent_changes), and brand-prefixed names (pipeworx_feedback). The absence of a single convention reduces predictability.
With 16 tools, the count is slightly above the typical well-scoped range (3-15). More importantly, 11 of these tools are generic utilities unrelated to OpenSea, making the server feel bloated and misaligned with its stated purpose. The tool count itself is not excessive, but the scope is inappropriate for an OpenSea MCP server.
For an OpenSea server, the tool surface is severely incomplete. It only covers basic read operations (get collection, get stats, get NFT, list NFTs). Critical missing operations include creating listings, making offers, buying/selling, transferring NFTs, and user management. The Pipeworx tools do not fill these gaps, leaving agents unable to perform core NFT workflows.
Available Tools
16 tools
ask_pipeworx (Grade: A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
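A minimal sketch of how an MCP client might invoke this tool through a standard JSON-RPC tools/call request, reusing one of the example questions from the description (the request id is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```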
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses automatic routing across many sources and that Pipeworx fills arguments and returns results. Does not cover limitations or failure modes, but adequate for the scope.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences plus a list of examples. Front-loaded with purpose, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given only one parameter and no output schema, the description covers usage, scope, and examples comprehensively. Suitable for an agent to understand and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has one parameter 'question' with brief description. The description adds significant value by providing example questions and explaining the breadth of sources, which helps the agent form appropriate queries.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it answers natural-language questions by routing to the appropriate data source, listing 300+ sources and giving concrete examples. It distinguishes itself from sibling tools that are more specific (e.g., compare_entities, entity_profile).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use: for natural-language queries like 'What is X?', and implies when not to use (if you want a specific tool). Mentions alternatives implicitly by saying it picks the right tool for you.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (Grade: A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
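An illustrative arguments payload, assuming the ticker examples given in the schema; the response shape (paired data plus pipeworx:// citations) is described only in prose above:

```json
{
  "type": "company",
  "values": ["AAPL", "MSFT", "GOOGL"]
}
```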
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that it pulls financial data from SEC EDGAR/XBRL for companies and adverse event reports from FAERS for drugs, and returns paired data with citation URIs. No destructive behavior is implied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that is front-loaded with the core purpose. It is concise but covers all key aspects without redundancy, though it could be slightly more structured with bullet points for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains return values (paired data + citation URIs). It covers inputs, usage contexts, and behavioral scope for a two-parameter tool, making it fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, but the description adds significant value by explaining the data sources for each type and providing example inputs, which clarifies beyond the schema's enums.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 companies or drugs side by side, with specific data points for each type. It distinguishes from sibling tools by emphasizing the comparative use case and replacing multiple sequential calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists when to use this tool, such as when the user says 'compare X and Y' or 'X vs Y'. It provides concrete examples but does not explicitly mention when not to use it or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
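An illustrative arguments payload; the query is taken from the schema's own example, and the limit of 10 is arbitrary (default 20, max 50):

```json
{
  "query": "analyze housing market trends",
  "limit": 10
}
```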
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It states it returns top-N relevant tools with names and descriptions, implying read-only behavior. However, it does not disclose any potential side effects, authorization needs, or limits beyond what is implied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that efficiently communicates purpose, usage examples, and return format. It is slightly dense but not unnecessarily verbose, with clear front-loading of the main purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and no output schema, the description adequately covers the return format ('top-N most relevant tools with names + descriptions') and parameter details. It is complete enough for an agent to decide when and how to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the schema already describing both parameters (query and limit). The description adds context (e.g., 'Natural language description') but does not provide significant additional meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Find tools by describing the data or task', with specific verb and resource. It distinguishes from sibling tools like ask_pipeworx or entity_profile by focusing on discovery of tools rather than using a specific one.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this FIRST when you have many tools available and want to see the option set', providing clear context for when to use. It lists many example use cases, but lacks explicit 'when not to use' or alternative tool mentions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (Grade: A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
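An illustrative arguments payload using the ticker example from the description; a zero-padded CIK such as "0000320193" would work equally well:

```json
{
  "type": "company",
  "value": "AAPL"
}
```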
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden for behavioral disclosure. It details the data returned (SEC filings, fundamentals, patents, news, LEI) and mentions citation URIs. However, it does not explicitly state that the operation is read-only or describe any side effects. The description is adequate but could be more explicit about the read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph with five sentences, each delivering essential information. It is front-loaded with the core purpose, followed by use cases, data returned, and input guidance. No unnecessary words or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of aggregating multiple data sources and the absence of an output schema, the description does a good job listing the categories of returned data. However, it does not describe the output structure (e.g., JSON format) or any limitations (e.g., number of recent filings). Slightly more detail would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds value by giving examples ('AAPL', '0000320193') and clarifying the 'type' parameter is limited to 'company' with future types planned. It also explains why 'value' cannot accept names and directs to resolve_entity. This is helpful beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get everything about a company in one call.' It lists specific data sources (SEC filings, revenue, patents, news, LEI) and explains that it is a composite tool replacing many others. It distinguishes itself from siblings like resolve_entity by specifying that names are not accepted and advising to use resolve_entity first.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use scenarios: 'when a user asks "tell me about X"... or you'd otherwise need to call 10+ pack tools.' It also gives negative guidance: 'Names not supported — use resolve_entity first if you only have a name.' This clearly differentiates from the sibling resolve_entity tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
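An illustrative arguments payload; the key is hypothetical and would normally be one previously created via remember:

```json
{
  "key": "target_ticker"
}
```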
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must fully disclose behavior. It implies a destructive action but does not explicitly state permanence, error handling for nonexistent keys, or required permissions. Adequate but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each serving a distinct purpose: stating the action, providing usage conditions, and suggesting tool combinations. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool with no output schema, the description covers purpose and usage but omits return behavior (e.g., success/failure confirmation) and error conditions. Adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the sole parameter 'key' described as 'Memory key to delete' in the schema. The tool description adds no further semantic value, so baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Delete a previously stored memory by key', specifying a concrete verb and resource. Among siblings (remember, recall), it uniquely identifies the deletion action, making it easy to distinguish.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use scenarios: 'when context is stale, the task is done, or you want to clear sensitive data'. It also advises pairing with remember and recall, offering clear guidance on related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_collection (Grade: A)
Fetch a collection by OpenSea slug. Returns name, description, contracts, social links, fees, image, banner. Find slugs at opensea.io/collection/<slug>.
| Name | Required | Description | Default |
|---|---|---|---|
| collection_slug | Yes | OpenSea collection slug (URL suffix) |
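An illustrative arguments payload; the slug is an example, and any slug taken from an opensea.io/collection/ URL follows the same form:

```json
{
  "collection_slug": "boredapeyachtclub"
}
```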
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits such as error handling, permissions, rate limits, or what happens if the slug is invalid. For a read tool, minimal transparency is offered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with purpose and output fields, then guidance on slug format. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter, no output schema, and no annotations, the description provides essential purpose and slug source. It lists returned fields but lacks details on return structure or edge cases. However, it is sufficient for the tool's low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameters with a description for collection_slug. The description adds practical guidance ('Find slugs at opensea.io/collection/<slug>'), which helps the agent locate valid values beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Fetch a collection by OpenSea slug' with a specific verb and resource, and lists the returned fields (name, description, contracts, etc.), distinguishing it from sibling tools like get_collection_stats and get_nft.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving a single collection but does not explicitly mention when not to use this tool or compare with alternatives like get_collection_stats or list_collection_nfts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_collection_stats (Grade: A)
Floor price, volume (24h/7d/30d/all), supply, owners, and best-offer for a collection.
| Name | Required | Description | Default |
|---|---|---|---|
| collection_slug | Yes | OpenSea collection slug |
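The same slug-based addressing applies here; an illustrative call:

```json
{
  "collection_slug": "boredapeyachtclub"
}
```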
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description states what data is returned but omits details like authentication needs, rate limits, or precise meaning of 'best-offer'. Acceptable for a simple read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence listing all key metrics with no filler. Efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple stats tool with one parameter and no output schema, the description sufficiently informs the agent about what to expect. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter, and the description only repeats 'OpenSea collection slug' already in schema. No added meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly lists the metrics returned (floor price, volume, supply, owners, best-offer) for a collection, clearly distinguishing it from sibling tools like get_collection (collection metadata) and get_nft (individual NFT data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies it's for aggregate statistics, but no explicit guidance on when to use vs alternatives (e.g., get_collection for metadata). No 'when not to use' or alternative tools mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_nft (Grade: B)
Single NFT by chain + contract address + token ID.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | Chain slug (ethereum, polygon, base, arbitrum, optimism, avalanche, klaytn, bsc, solana, etc.) | |
| token_id | Yes | Token ID (numeric string for ERC-721/1155) | |
| contract_address | Yes | Contract address (0x... for EVM) |
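An illustrative arguments payload; the contract address below is a zeroed placeholder rather than a real collection contract, and the token id is arbitrary:

```json
{
  "chain": "ethereum",
  "contract_address": "0x0000000000000000000000000000000000000000",
  "token_id": "1"
}
```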
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. It only indicates a read operation without addressing error handling, authentication, or response format. Minimal disclosure beyond basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, well-structured sentence that immediately conveys the core function. No extraneous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Acceptable for a simple lookup tool with fully described parameters, but lacks explanation of return values, error cases, or supported chain values beyond schema hints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema provides 100% coverage with parameter descriptions. The description adds no additional nuance beyond summarizing the three parameters, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves a single NFT using three unique identifiers (chain, contract address, token ID), distinguishing it from sibling tools that list multiple NFTs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like list_owned_nfts or list_collection_nfts. Missing context on selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_collection_nfts (Grade: A)
List NFTs inside a collection (paginated). Returns identifier, name, image, traits, owner, last sale.
| Name | Required | Description | Default |
|---|---|---|---|
| next | No | Pagination cursor from prior response | |
| limit | No | 1-200 (default 50) | |
| collection_slug | Yes | OpenSea collection slug |
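An illustrative first-page request; a follow-up call would pass the next cursor returned by this response (the slug is an example):

```json
{
  "collection_slug": "boredapeyachtclub",
  "limit": 50
}
```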
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description reveals pagination and return fields but omits any behavioral details such as rate limits, authentication requirements, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence with 14 words that front-loads the core purpose and includes key details without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description covers return fields but lacks guidance on pagination cursor usage. Overall adequate for a simple list tool with well-documented input schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers all parameters with 100% description coverage; the description adds no extra meaning beyond stating pagination, which is already implied by the 'next' parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List NFTs inside a collection'), specifies pagination, and lists return fields, distinguishing it from sibling tools like 'list_owned_nfts' and 'get_nft'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching collection NFTs but does not provide explicit guidance on when to use this tool versus alternatives like searching by owner or by slug.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_owned_nfts (Grade: C)
NFTs owned by an address on a specific chain. Paginated.
| Name | Required | Description | Default |
|---|---|---|---|
| next | No | Pagination cursor | |
| chain | Yes | Chain slug | |
| limit | No | 1-200 (default 50) | |
| address | Yes | Owner wallet address | |
| collection_slug | No | Restrict to a single collection (optional) |
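An illustrative arguments payload; the wallet address is a zeroed placeholder, and collection_slug could be added to restrict results to a single collection:

```json
{
  "chain": "ethereum",
  "address": "0x0000000000000000000000000000000000000000",
  "limit": 50
}
```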
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. It only mentions pagination, but does not state whether the operation is read-only, if any side effects occur, or what the response structure looks like. With zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of two short sentences that front-load the key purpose and pagination aspect. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there is no output schema, the description should provide more context about return format, error handling, or default ordering. With 5 parameters and multiple sibling tools, the description is too brief to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents all parameters. The description adds no additional meaning beyond what is in the schema. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool lists NFTs owned by an address on a specific chain, and mentions pagination. It does not explicitly differentiate the tool from siblings like list_collection_nfts, though the verb+resource combination is specific enough.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as list_collection_nfts or get_nft. The description does not provide context for appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (Grade: A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
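An illustrative data_gap report; the message text is invented for the example and does not describe an actual gap:

```json
{
  "type": "data_gap",
  "message": "get_collection_stats only returns 24h/7d/30d/all volume; a 90d window would be useful."
}
```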
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Although no annotations are provided, the description discloses key behavioral traits: rate-limited to 5 per identifier per day, free usage, and that feedback directly affects the roadmap. This compensates well for the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is 5 sentences, starting with the core purpose, then elaborating on usage and constraints. It is concise without sacrificing clarity, though slightly verbose in the middle section.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a feedback submission tool, the description covers all necessary context: what to submit, how to frame it, rate limits, and impact. The absence of an output schema is acceptable because the tool's outcome is straightforward.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameters with descriptions. The description adds value by reinforcing the meaning of the type enum and advising users to focus on tool/pack context rather than end-user prompts, which goes beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for submitting bugs, feature requests, data gaps, or praise to the Pipeworx team. It explicitly lists the feedback types and contrasts with sibling tools by indicating its unique purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use each feedback type (e.g., 'wrong/stale data' for bug, 'tool you wish existed' for feature). Additionally, instructs to describe issues in terms of Pipeworx tools/packs and not to paste user prompts, and mentions rate limits and quota impact.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
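An illustrative arguments payload retrieving a hypothetical key; passing an empty object ({}) would instead list all stored keys:

```json
{
  "key": "target_ticker"
}
```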
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses scoping by identifier and pairing with siblings. Minor gap: does not state behavior if key not found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adding information: function, use case, and context. No fluff, well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema; description does not specify return format. But for a simple retrieval tool with one optional param, coverage is good. Missing output details slightly reduce completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Single parameter 'key' with 100% schema coverage. Description repeats schema info ('omit to list all keys') without adding new meaning. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a value saved via 'remember' or lists all keys when no argument is given. It uses specific verbs and distinguishes from sibling tools 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: to look up previously stored context without re-deriving. Contrasts with 'remember' (save) and 'forget' (delete), providing clear alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (Grade: A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
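An illustrative arguments payload using the relative-shorthand form of since recommended in the schema:

```json
{
  "type": "company",
  "value": "AAPL",
  "since": "30d"
}
```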
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses key behaviors: parallel fan-out to multiple sources, return structure (structured changes, total_changes, citation URIs), and parameter options. Missing details on auth or rate limits, but acceptable for a read-only tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with four sentences, front-loading the purpose and use cases. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is complete for a tool with no output schema: it explains purpose, parameters, sources, and return structure. No critical gaps are apparent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers all 3 parameters with full descriptions. The description adds value by explaining 'since' accepts ISO dates or relative shorthand and 'value' can be ticker or CIK, enhancing understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'What's new with a company in the last N days/months?' and provides specific examples of user queries. It distinguishes itself by fanning out to multiple sources (SEC EDGAR, GDELT, USPTO) and includes a clear verb+resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes explicit usage examples like 'when a user asks 'what's happening with X?'' and covers the fan-out behavior. However, it does not explicitly contrast with sibling tools like 'entity_profile' or 'validate_claim', leaving some ambiguity about when to prefer this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
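An illustrative arguments payload; the key and value follow the naming examples in the schema and are hypothetical:

```json
{
  "key": "target_ticker",
  "value": "AAPL"
}
```

A later recall call with the same key would return "AAPL", and forget would delete it.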
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses key behaviors: key-value pair scoped by identifier, persistence differences between authenticated and anonymous sessions (persistent vs. 24-hour retention). No annotations are provided, so this description carries the full burden. However, it does not explicitly state whether storing a key overwrites the previous value, which would enhance transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded: purpose first, then usage guidance, then behavioral details. Every sentence adds value, with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is adequate for a simple store tool. It covers purpose, usage, behavior, and parameter semantics. However, it does not mention the return value or error conditions (e.g., what happens if key already exists). Given that there is no output schema, a brief note on return would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema provides 100% coverage with descriptions. The description adds context beyond schema by giving concrete examples of key patterns and clarifying that value can be any text, aiding the agent in constructing proper arguments.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool saves data for later reuse, with specific examples like 'resolved ticker' or 'target address'. It distinguishes itself from sibling tools 'recall' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly describes when to use the tool: 'when you discover something worth carrying forward'. Also mentions pairing with recall and forget, providing clear context and alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (Grade: A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
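An illustrative arguments payload using the drug example from the description; a company lookup would instead pass type "company" with a name or ticker:

```json
{
  "type": "drug",
  "value": "ozempic"
}
```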
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the burden. It discloses return type (IDs plus citation URIs) and multi-entity lookup capability, but lacks details on error handling, failure behavior, or authentication requirements. Adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences, front-loaded with purpose, and every sentence adds necessary information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers return types, ID systems, and workflow positioning. It lacks edge-case handling details but provides sufficient context for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, yet the description adds substantial value by providing concrete examples (e.g., 'Apple' → AAPL), explaining valid input formats (ticker, CIK, name), and clarifying the ID systems returned for each entity type.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it resolves canonical/official identifiers (CIK, ticker, RxCUI, LEI) for companies and drugs, with examples. It distinguishes itself from sibling tools by noting it replaces 2-3 lookup calls and should be used before other tools requiring identifiers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use it (when user needs official identifiers) and positions it before other tools. It could have added explicit when-not-to-use scenarios, but the guidance is clear and actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (Grade: A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
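An illustrative arguments payload reusing the example claim from the schema:

```json
{
  "claim": "Apple's FY2024 revenue was $400 billion"
}
```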
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden. It discloses the tool is a read operation (fact-checking), specifies data sources (SEC EDGAR + XBRL), and lists possible verdict types. It does not explicitly state it is non-destructive, but the nature of the operation implies no side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single well-organized paragraph of 5 sentences. It front-loads synonyms, then usage, scope, return values, and efficiency benefit. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given only one parameter and no output schema, the description fully explains the tool's purpose, scope, inputs, outputs (verdict types, extracted form, citation), and limitations. No critical information is missing for an agent to decide and use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema provides 100% coverage with a description of the 'claim' parameter, but the tool description adds significant context: it clarifies the claim should be a natural-language factual statement and gives concrete examples, enhancing understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool's function (fact-check/verify) and resource (natural-language factual claims), with detailed scope (company-financial claims for US public companies via SEC EDGAR + XBRL). It distinguishes itself from siblings like ask_pipeworx or entity_profile by specifying its specialized fact-checking role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly guides when to use: 'Use when an agent needs to check whether something a user said is true' and provides example phrasings. It also notes the current scope limitation (company-financial claims). While it doesn't explicitly state when not to use, the scope definition implicitly excludes non-financial claims.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.