Solscan
Server Details
Solscan MCP — Solana block-explorer API (Pro v2)
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-solscan
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 16 of 16 tools scored. Lowest: 3.4/5.
The server mixes Solana blockchain tools with a large set of Pipeworx general-purpose query tools (ask_pipeworx, compare_entities, etc.). This creates ambiguity about which tool to use, since ask_pipeworx can answer many of the same questions that more specific tools like entity_profile or validate_claim target.
Tool names follow no single pattern: some use verb_noun snake_case (get_account_detail), others are verb phrases (ask_pipeworx), single words (forget), or compound (pipeworx_feedback). This inconsistency makes it harder for agents to predict tool names.
With 16 tools, the count is on the higher side but not excessive. However, the scope is unclear: the server is named Solscan, yet most tools query generic Pipeworx data sources, which makes the count feel inflated for a blockchain scanner.
The Solana tools cover basic account, token, and transaction queries but lack write operations (e.g., send transaction) and more advanced Solana features. The Pipeworx side is broad but has overlapping tools (ask_pipeworx vs. compare_entities vs. entity_profile), indicating redundancy rather than completeness.
Available Tools
16 tools

ask_pipeworx (A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
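To make the call shape concrete, here is a minimal sketch of the JSON-RPC tools/call request an MCP client might send for this tool. The envelope follows the standard MCP pattern, and the question reuses one of the examples from the description above; it is illustrative, not taken from a real session.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```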
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description does not detail side effects, latency, or access limitations. It explains the routing behavior but could be more transparent about potential delays or external dependencies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the action and closes with concrete example queries. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lists numerous data sources and examples, making it clear what the tool can handle. No output schema exists, but the tool returns natural-language answers. Minor gap: the output format is not described.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'question', which is well-described. The description adds example queries but does not significantly enhance the schema's meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool answers natural-language questions by selecting the right data source, and distinguishes it from sibling tools like compare_entities or get_token_holdings which are specific. Examples solidify the purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use it (e.g., 'What is X?', 'Look up Y') and that it avoids needing to pick a specific tool. It could be improved by advising against use when a sibling tool directly answers the query, but overall provides clear context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
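For illustration, a minimal sketch of the arguments object for a company comparison, reusing the ticker examples from the parameter table; the drug variant would instead pass type "drug" with names such as "ozempic" and "mounjaro".

```json
{
  "type": "company",
  "values": ["AAPL", "MSFT"]
}
```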
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, so the description carries the full burden. It discloses data sources (SEC EDGAR/XBRL for companies, FAERS/FDA/clinicaltrials for drugs) and the output format (paired data + citation URIs). The operation is implied to be read-only; no side effects are mentioned, and it appears safe.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently organized: the description opens by defining scope and trigger phrases, then explains the data pulled per type and the output. No redundancy; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 simple params and no output schema, description covers key aspects: what to expect, data sources, and citation links. Minor omission: no mention of error handling for invalid inputs, but overall sufficient for agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds substantial meaning beyond 100% covered schema: explains what each type yields (financials vs. adverse events, etc.), gives concrete examples (tickers like AAPL, drug names like ozempic), and introduces data sources not in schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Verb 'compare' with specific resource 'companies or drugs' and scope '2-5' clearly states purpose. Distinguishes from siblings by claiming it replaces 8-15 sequential calls (vs. entity_profile for single entities). No ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit triggers: 'compare X and Y', 'X vs Y', 'stack up', 'which is bigger'. Breaks down when to use each type. However, no explicit when-not-to-use or alternatives beyond the implied efficiency comparison.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
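A hedged sketch of the arguments an agent might pass when browsing the catalog; the query string is the schema's own example, and the limit simply narrows the default of 20.

```json
{
  "query": "look up FDA drug approvals",
  "limit": 10
}
```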
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully carries the burden of behavioral disclosure. It states the tool returns the top-N relevant tools without side effects or mutations. It could be improved by explicitly noting that the operation is read-only, though the absence of any destructive hint and the nature of 'discover' imply safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficient and front-loaded with the core function, but the long list of example domains introduces some bloat. Still, every sentence serves a purpose and the structure is clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only 2 parameters and no output schema, the description is complete: it explains the return value (top-N tools with names and descriptions) and provides necessary context for an agent to decide when to invoke it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both parameters are described in the schema, and the description adds value by specifying default and maximum for 'limit' and providing an example for 'query'. The schema coverage is 100%, so the description supplements rather than replaces schema info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Find tools by describing the data or task' and 'Returns the top-N most relevant tools with names + descriptions.' It clearly distinguishes from sibling tools that focus on specific entities or actions, making it the go-to for discovery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Use when you need to browse, search, look up, or discover what tools exist for...' and explicitly advises to 'Call this FIRST when you have many tools available and want to see the option set.' It effectively tells the agent when to use this tool vs. alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
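A minimal sketch of a profile request for Apple using the ticker form; per the parameter table, the zero-padded CIK "0000320193" would be accepted in place of the ticker.

```json
{
  "type": "company",
  "value": "AAPL"
}
```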
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It fully explains the tool's behavior: it returns aggregated data from multiple sources, requires a ticker or CIK, and provides citation URIs. There is no mention of destructive actions or auth needs, which is appropriate for a read-only profile tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise yet comprehensive. It front-loads the core purpose in the first sentence, then expands with usage triggers, output list, and input constraints. Every sentence adds value without redundancy. The length is appropriate for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregating data from multiple sources), the description is complete. It covers the output categories (filings, fundamentals, patents, news, LEI), input requirements, and usage context. No output schema exists, but the description adequately details what the agent can expect in return.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers both parameters (type and value) with descriptions. The tool description adds significant semantics beyond the schema: it clarifies that 'type' only supports 'company' (others coming soon), that 'value' can be a ticker or zero-padded CIK, and explicitly states that names are not supported, pointing to resolve_entity as an alternative. This enriches the schema information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get everything about a company in one call.' It enumerates specific user queries that trigger its use and lists the types of data returned (SEC filings, fundamentals, patents, news, LEI). This distinguishes it from sibling tools that might provide more granular data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use scenarios: when a user asks for company profiles, research, or briefings. It also specifies when not to use it (e.g., if only a name is available, suggesting resolve_entity first). This guides the agent to choose the correct tool in the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It states that it deletes a memory, implying a destructive operation, but does not disclose irreversibility, required permissions, or the behavior when the key does not exist. Basic transparency, but it could be more thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences: the first states purpose, the second provides usage guidance, the third points to related tools. No fluff, front-loaded, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers the main usage and context. Could mention return values or error handling, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes the 'key' parameter. The description adds 'by key' which matches the schema but does not provide additional semantics beyond what is in the input schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Delete a previously stored memory by key,' specifying the action (delete) and resource (memory by key). It distinguishes itself from siblings 'remember' and 'recall' by name, making its purpose distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use scenarios: 'when context is stale, the task is done, or you want to clear sensitive data.' It also suggests pairing with 'remember' and 'recall', offering context for alternatives. However, it lacks when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account_detail (A)
Overview of a Solana account: SOL balance (lamports + UI), owner program, executable flag, rent epoch.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Solana public key (base58) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present. The description lists returned fields but does not disclose behavioral traits such as data source freshness, caching, or side effects. For a read-only tool, this is minimal but not misleading.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that conveys the essential purpose without extraneous words. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity and absence of output schema or annotations, the description sufficiently outlines the returned fields. However, it lacks an example or further behavioral context for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the one parameter (address) is well-described in the schema. The description adds no additional meaning beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the tool provides an overview of a Solana account and lists exact fields (SOL balance, owner, executable flag, rent epoch). It clearly distinguishes from sibling tools like get_token_holdings or get_transaction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for basic account info but does not explicitly state when to use this tool versus alternatives like entity_profile or get_token_meta. No direct guidance provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_token_holdings (A)
SPL-token balances held by a Solana account. Returns mint, symbol, amount, decimals, USD value (if known).
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | 1-based page (default 1) | 1 |
| address | Yes | Solana account public key | |
| page_size | No | 1-40 (default 20) | 20 |
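A hedged sketch of a paginated holdings call. The address shown is the well-known wrapped-SOL mint, used here purely as a syntactically valid base58 placeholder; the pagination values restate the documented defaults.

```json
{
  "address": "So11111111111111111111111111111111111111112",
  "page": 1,
  "page_size": 20
}
```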
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description carries the burden. It mentions return fields but does not disclose behavioral traits like pagination, rate limits, or that it is a read-only operation. The pagination parameters are in the schema but not highlighted in the description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences, front-loading the core action and the returned fields. Every word adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description partially covers return fields but omits details like the response structure (list vs. object), pagination behavior, or error handling. The page and page_size parameters hint at pagination, but this is not explained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with parameter descriptions, so the baseline is 3. The description adds context about the tool returning token balances but does not elaborate on the meaning of page or page_size beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving SPL-token balances for a Solana account. It specifies the output fields (mint, symbol, amount, etc.), which distinguishes it from sibling tools like get_token_meta or get_account_detail.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool should be used when token holdings data is needed for a given account, but it does not explicitly state when to avoid this tool or recommend alternatives. Sibling tools exist (e.g., get_token_meta), but no guidance on when to use one over the other is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_token_meta (B)
Metadata for an SPL token mint: name, symbol, decimals, supply, icon, market cap, holders, social links.
| Name | Required | Description | Default |
|---|---|---|---|
| token_address | Yes | SPL token mint address | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of disclosing behavioral traits. It does not mention idempotency, caching, rate limits, or any side effects beyond the implied read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that efficiently conveys the tool's purpose and the data it returns. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (single required parameter, no output schema), the description lists the return fields sufficiently. However, it omits error handling or edge cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the single parameter 'token_address' described as 'SPL token mint address'. The description adds no additional meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as retrieving metadata for an SPL token mint, listing specific fields (name, symbol, decimals, etc.). This distinguishes it from sibling tools like get_account_detail or get_token_holdings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It does not mention prerequisites, limitations, or conditions under which to prefer other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transaction (A)
Transaction detail by signature: status, slot, block time, fee, balance changes, parsed instructions.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Solana transaction signature (base58) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It lists returned fields but does not disclose whether the operation is read-only, requires authentication, or has rate limits. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the purpose, no redundant words. Efficiently communicates the tool's function and output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description provides a useful list of returned fields, adequately covering the tool's output. However, it could explicitly mention the return format (e.g., JSON object).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not add meaning beyond the input schema: schema already describes the 'signature' parameter as 'Solana transaction signature (base58)'. Baseline 3 applies since schema coverage is 100%.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns transaction details by signature, listing specific fields (status, slot, block time, fee, balance changes, parsed instructions). It distinguishes from sibling tools like get_account_detail or get_token_holdings, which target different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a transaction signature is available and details are needed, but does not explicitly state when not to use it or mention alternative tools for related queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_transfers (A)
Recent SOL and SPL token transfers for an account. Returns signature, timestamp, side (sent/received), token, amount, counterparty.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | 1-based page (default 1) | 1 |
| address | Yes | Solana account public key | |
| page_size | No | 1-40 (default 20) | 20 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It lists all return fields (signature, timestamp, side, token, amount, counterparty) and indicates 'recent' scope. However, it does not specify the recency window (e.g., last 24 hours) or ordering.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and return fields. No extraneous words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 params, no output schema) and sibling context, the description covers purpose and output fields. Minor missing info: ordering, exact recency timeframe, and that results are paginated (implied by pagination params). Nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (all three parameters have descriptions), and details such as the 1-based 'page' and the public-key 'address' already live in those parameter descriptions. The tool description adds no meaning beyond the schema, so the baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'list transfers for an account' with specific return fields (signature, timestamp, side, token, amount, counterparty). It distinguishes from siblings like get_transaction (single transaction) and get_token_holdings (holdings vs transfers).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for recent transfers but does not explicitly state when to use this tool vs alternatives, nor does it provide when-not-to-use or exclusion criteria. No sibling differentiation guidance is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
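A hedged sketch of a bug report payload. The message is invented for illustration and, as the description requests, refers to a Pipeworx tool rather than the end-user's prompt; the shape of the optional context object is an assumption based on its schema description.

```json
{
  "type": "bug",
  "message": "get_token_holdings returned stale USD values for a large account.",
  "context": { "tool": "get_token_holdings" }
}
```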
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: rate-limited (5 per identifier per day), free, doesn't count against quota, team reads digests daily, and notes the expected format. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is detailed but well-structured, front-loading purpose then usage then details. Every sentence is informative, though slightly longer than minimal. Still concise given the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters (one nested), no output schema, and no annotations, the description covers all essential aspects: purpose, usage, limitations, format, and behavioral constraints. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining the 'type' enum in user-friendly terms and specifying message length limits (2000 chars, 1-2 sentences). 'context' gets a brief mention, but the schema already covers it well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to submit feedback about bugs, missing features, data gaps, or praise. It provides specific verb+resource ('Tell the Pipeworx team') and distinguishes from siblings since no other tool handles feedback.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use each feedback type (bug, feature/data_gap, praise) and what to avoid ('don't paste the end-user's prompt'). Also mentions rate limits and that it's free, giving clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
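As a sketch of the dual behavior described above, a key-specific lookup looks like the payload below, while sending an empty arguments object ({}) lists every stored key. The key name is hypothetical.

```json
{ "key": "target_ticker" }
```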
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description fully discloses behavioral traits: it retrieves previously stored values, optionally lists keys when omitting the argument, is scoped to an identifier, and references companion tools for saving/deleting. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise yet informative, with the main action front-loaded. Every sentence adds value: retrieval behavior, usage examples, scope, and pairing instructions. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one optional parameter and no output schema, the description is fully complete. It covers purpose, usage, scope, and relationships, leaving no ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single 'key' parameter, so baseline is 3. The description adds context by explaining the dual behavior (retrieve vs. list) and the scope, providing extra value beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Retrieve a value previously saved via remember, or list all saved keys', specifying the exact verb-resource relationship. It distinguishes the tool from siblings like remember and forget by mentioning them directly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance on when to use the tool: 'look up context the agent stored earlier', with concrete examples like user's target ticker, address, research notes. It also notes the scope (identifier-based) and pairing with remember and forget, providing clear alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
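A minimal sketch of a monitoring call using the relative shorthand documented in the parameter table; an ISO date such as "2026-04-01" would be accepted in place of "30d". The ticker is illustrative.

```json
{
  "type": "company",
  "value": "MSFT",
  "since": "30d"
}
```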
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses fan-out to three parallel sources, accepted date formats, and return structure (changes, count, URIs). It does not mention rate limits or auth needs, but for a read tool this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise yet informative, front-loading the purpose, then usage examples, then behavior, then parameter details, then return type. Every sentence adds value with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers use cases, data sources, parameter formats, and return structure. It lacks error handling or performance considerations (e.g., parallel fan-out may be slow), but is otherwise complete for a query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the schema descriptions are already detailed (e.g., date formats, ticker formats). The description adds no new meaning for parameters—it only repeats what's in the schema. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool answers 'What's new with a company' and provides multiple user query examples, making purpose unmistakable. It distinguishes itself from sibling tools like entity_profile or compare_entities by focusing on recency and changes over time.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool via query patterns (e.g., 'what's happening with X?', 'brief me on ...') and lists the data sources fanned out. However, it does not provide explicit when-not-to-use guidance or alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
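As a sketch of the memory workflow, a remember call might store a resolved ticker under a short key; the same key would later be passed to recall to retrieve it or to forget to delete it. Both values are hypothetical.

```json
{
  "key": "target_ticker",
  "value": "AAPL"
}
```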
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses key-value storage, identifier scoping, and persistence duration. No annotations provided, so description carries full burden. Lacks mention of overwrite semantics or rate limits, but still informative.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise paragraph, each sentence adds value: purpose, usage, storage details, sibling pairing. No redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description adequately covers all aspects: what, when, how, with whom. Complete for a simple memory tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both parameters with descriptions. Adds value with example keys and clarifies value accepts any text. Does not elaborate on required vs optional beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it saves data for reuse across conversations/sessions. Distinguishes from sibling tools recall and forget by explicitly mentioning pairing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use it: when the agent discovers information worth carrying forward. Explains scoping, persistence based on authentication, and pairing with recall and forget.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
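A hedged sketch of a drug lookup using one of the description's own examples; the company variant would pass type "company" with a ticker, CIK, or name instead.

```json
{
  "type": "drug",
  "value": "ozempic"
}
```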
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description effectively communicates that the tool is a read-only lookup returning IDs and citation URIs. It does not disclose error handling or rate limits, but the core behavior is transparent given the simple nature of the tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (approximately 70 words) and front-loaded with the primary action. Every sentence contributes meaningful information without redundancy, achieving a high information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple two-parameter schema and lack of output schema or annotations, the description provides sufficient context: purpose, usage timing, return content. It could mention edge cases (e.g., no match found) but remains largely complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by explaining the identifier systems and providing concrete examples (e.g., 'Apple → AAPL / CIK 0000320193'), which goes beyond the basic enum and string descriptions in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Look up the canonical/official identifier for a company or drug.' It specifies the output identifiers (CIK, ticker, RxCUI, LEI) and distinguishes itself from siblings by emphasizing its role as a prerequisite lookup for other tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use this BEFORE calling other tools that need official identifiers.' It also gives examples and notes efficiency ('Replaces 2–3 lookup calls'), but lacks explicit when-not-to-use guidance or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
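A minimal sketch reusing the claim example from the parameter table; per the description, the response would carry a verdict, the extracted structured form, the actual value with a pipeworx:// citation, and the percent delta.

```json
{
  "claim": "Apple's FY2024 revenue was $400 billion"
}
```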
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, so description bears full burden. Discloses supported claim domain (company-financial via SEC EDGAR/XBRL), return values (verdict, structured form, actual value with citation), and replaces multiple steps. Does not cover auth, rate limits, or errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One dense paragraph with no wasted words. Efficiently covers action, usage, supported claims, return values, and rationale for existence. Well front-loaded with the primary verbs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given single parameter and no output schema, description is highly complete. Explains verdict types, structured form, citation scheme, and domain limitations. Leaves little ambiguity for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Single parameter `claim` with 100% schema coverage. Description adds value with examples of natural-language claims (e.g., "Apple's FY2024 revenue was $400 billion") and explains the format beyond the schema's bare definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool's purpose: fact-check, verify, validate a claim against authoritative sources. Provides specific verbs and examples of natural-language claims, distinguishing it from siblings like ask_pipeworx or compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (checking truth of user statements) with example phrases. Notes supported claim types (company-financial) and efficiency gains. Lacks explicit when-not-to-use or alternatives, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
A connector's status is marked unhealthy when Glama is unable to connect to the server successfully. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!