PubChem
Server Details
PubChem MCP — NIH chemistry compound database (no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-pubchem
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 15 of 15 tools scored.
The tools are generally well-described, but there is potential overlap between the general-purpose ask_pipeworx and specific tools like compare_entities and validate_claim. The mix of PubChem and Pipeworx domains adds ambiguity about when to use which set.
Tool names follow inconsistent conventions: some use a 'get_' prefix (get_compound, get_synonyms), while others are bare verb phrases (discover_tools, validate_claim). CamelCase is absent, but the patterns are still not uniform, e.g., the verb-based 'search_by_name' versus the noun-based 'entity_profile'.
15 tools is a reasonable number, but the server spans two distinct domains (PubChem and Pipeworx), which makes it slightly broader than ideal. Still within the acceptable range.
For PubChem, basic compound retrieval is covered, but advanced queries (e.g., substructure search) are missing. Pipeworx tools provide broad data access but lack update/delete operations. The combination leaves gaps in each domain.
Available Tools
15 tools
ask_pipeworx
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
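For illustration, a minimal `tools/call` argument payload for this tool (tool name plus arguments; the JSON-RPC envelope is omitted), reusing one of the example questions from the description:

```json
{
  "name": "ask_pipeworx",
  "arguments": {
    "question": "What is the US trade deficit with China?"
  }
}
```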
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It explains that Pipeworx picks the right tool, fills arguments, and returns results, and lists many data sources. However, it does not disclose behavioral traits such as rate limits, authentication needs, or what happens if the question is ambiguous or no source matches. This is a moderate disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat lengthy but well-structured, starting with the main purpose, then usage guidance, followed by examples. It is front-loaded and every sentence adds value, though a slightly more concise version could improve readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has a single parameter, no output schema, and no annotations, the description covers the purpose, usage, and examples thoroughly. It could mention failure modes or limitations, but overall it is sufficiently complete for an AI agent to understand the tool's capabilities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a clear description for the single 'question' parameter. The description adds value by providing context and examples of valid questions, going beyond the schema's 'natural language' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that this tool answers natural-language questions by automatically selecting the appropriate data source. It provides specific verb-resource combinations like 'What is X?', 'Look up Y', and 'Find Z', and distinguishes itself from sibling tools (e.g., compare_entities, discover_tools) which are more specialized.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when a user asks...' and provides clear examples of query types. It does not explicitly state when not to use it or suggest alternatives, but the context implies it is the go-to tool for general questions, making the guidance adequate but not exhaustive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
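A sketch of a call comparing the two tickers used as the schema example; the response shape beyond "paired data + citation URIs" is not documented here:

```json
{
  "name": "compare_entities",
  "arguments": {
    "type": "company",
    "values": ["AAPL", "MSFT"]
  }
}
```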
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses data sources (SEC EDGAR/XBRL for companies; FAERS, FDA, trials for drugs) and return type (paired data + citation URIs). It does not mention if the tool is read-only or any side effects, but the context implies no mutation. No contradictions with annotations (none provided).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (5 sentences) with no wasted words. It front-loads the core purpose and usage, then provides details on types and what data to expect. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (two entity types, multiple data points) and the absence of an output schema, the description covers most necessary context: supported types, data sources, and use cases. However, it does not explain the return format in detail or pagination/citations beyond URIs, leaving minor ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already has 100% description coverage, providing baseline 3. The description adds meaningful context by explaining what data is pulled for each 'type' (e.g., revenue, net income for companies; adverse events for drugs) and the format for 'values' (tickers vs drug names). This goes beyond the schema's minimal descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'compare' and the resource '2–5 companies (or drugs)', making the tool's purpose immediately obvious. It distinguishes itself from sibling tools (e.g., search, profile) by explicitly noting it replaces 8–15 sequential agent calls, so the agent knows when to use it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage triggers like 'compare X and Y', 'X vs Y', and specific use cases (revenue, adverse events). However, it does not mention when NOT to use this tool (e.g., if the user wants a single entity or more than 5 items), slightly lowering the score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
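An illustrative call using one of the example queries from the schema; the limit value here is arbitrary, chosen within the documented maximum of 50:

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "look up FDA drug approvals",
    "limit": 10
  }
}
```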
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states that the tool 'Returns the top-N most relevant tools with names + descriptions.' This is useful but lacks details on how relevance is determined, whether results are sorted, or if there are limitations (e.g., only top-20 default). Some behavioral transparency is present, but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise given its scope, with most sentences adding value. It front-loads the core action ('Find tools by describing the data or task') and then provides examples and usage guidance. It could be slightly trimmed, but it's well-structured and not overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a discovery tool with many siblings, the description is fairly complete: it explains what it does, when to use it, and what to expect in return (top-N relevant tools with names and descriptions). There is no output schema, but the description suffices. It could mention pagination or result format, but overall it covers the essential context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for both parameters (query and limit). The description adds context by providing example queries for the 'query' parameter, but this largely supplements rather than adds new meaning. The description does not elaborate on the 'limit' parameter beyond what the schema says. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to find tools by describing data or task. It lists numerous domains (SEC filings, financials, FDA drugs, etc.) and includes a clear verb+resource structure ('browse, search, look up, or discover what tools exist'). It distinguishes itself from sibling tools (which are mostly entity lookups or feedback) by being a discovery tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' This provides direct guidance on when to use the tool. While it doesn't explicitly state when not to use it, the context makes it clear that this is for exploration rather than targeted lookups.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
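A minimal example call using the ticker form of the identifier; a zero-padded CIK such as "0000320193" would work the same way:

```json
{
  "name": "entity_profile",
  "arguments": {
    "type": "company",
    "value": "AAPL"
  }
}
```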
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description fully discloses the return data (SEC filings, fundamentals, patents, news, LEI, citation URIs). It implies a read-only operation, though this is not explicitly stated; no side effects are mentioned, and the context suggests a safe read.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with purpose first, then usage examples, then output details. Slightly verbose but packs necessary information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description enumerates what is returned. Does not mention pagination or limitations, but given the compound nature of the tool, it provides sufficient context for an agent to understand its capabilities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and descriptions are present. The tool description adds value by explaining the 'type' enum only supports 'company' and clarifying that 'value' accepts ticker or CIK, not names, with a reference to resolve_entity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get everything about a company in one call', specifying the verb and resource. It distinguishes from siblings by mentioning the alternative 'resolve_entity' for name resolution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists example user queries and when to use this tool (e.g., 'tell me about X') and when not to (e.g., using resolve_entity for names). Provides clear context for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
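A minimal example; "target_ticker" is a hypothetical key, borrowed from the remember tool's schema examples:

```json
{
  "name": "forget",
  "arguments": {
    "key": "target_ticker"
  }
}
```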
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must reveal behavior. It states 'delete,' implying a destructive action, but lacks details on irreversibility, error handling, or permissions. Adequate for a simple operation but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that front-load the purpose and immediately provide usage guidance and pairing. No redundant words; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (one param, no output schema, no annotations), the description covers purpose, usage, and sibling relationships. Missing details on return values or confirmation, but it is sufficient for a basic delete tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameters with a description for 'key.' The description only echoes 'by key,' adding no extra meaning beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Delete a previously stored memory by key,' specifying the verb 'delete' and the resource 'memory.' It distinguishes from sibling tools 'remember' and 'recall' by focusing on deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists conditions for use: 'when context is stale, the task is done, or you want to clear sensitive data.' Also suggests pairing with 'remember' and 'recall,' providing clear alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_classification
Pharmacological classification tree for a CID — drug class, mechanism of action, biological role tags. Useful for "what kind of drug is X" questions.
| Name | Required | Description | Default |
|---|---|---|---|
| cid | Yes | PubChem Compound ID | |
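An illustrative call; the CID is a placeholder (shown as a number, though the schema does not state the expected type), and real CIDs would normally come from search_by_name:

```json
{
  "name": "get_classification",
  "arguments": {
    "cid": 2244
  }
}
```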
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description does not disclose behavioral traits beyond the purpose. With no annotations provided, the agent is left uninformed about potential constraints, permissions, or side effects. For a retrieval tool, it would benefit from stating it is read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence with a usage hint, front-loading the key information without extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (single parameter, no output schema), the description is mostly complete. It could mention the format of the returned classification tree, but it is adequate for an agent to understand the tool's purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description only adds minor context ('for a CID'). The parameter is well-documented in the schema, so the description adds marginal value, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns a pharmacological classification tree for a CID, listing drug class, mechanism, and biological role tags. It distinguishes from sibling tools like get_compound by focusing on classification rather than generic compound data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions it is useful for 'what kind of drug is X' questions, providing clear context. However, it does not mention when not to use it or alternatives (e.g., get_compound for detailed properties).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_compound
Get standard properties for a PubChem CID: molecular formula, molecular weight, IUPAC name, canonical SMILES, isomeric SMILES, InChI, InChIKey, exact mass, charge, complexity, H-bond donors/acceptors, rotatable bonds, TPSA, XLogP, heavy atom count.
| Name | Required | Description | Default |
|---|---|---|---|
| cid | Yes | PubChem Compound ID | |
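Likewise, an illustrative call with a placeholder CID obtained earlier from search_by_name:

```json
{
  "name": "get_compound",
  "arguments": {
    "cid": 2244
  }
}
```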
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It transparently lists the returned properties and implies a read-only operation. It does not disclose potential errors, authentication needs, or rate limits, but for a simple get operation this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence listing many properties, which is relatively concise but slightly long. It could be broken into bullet points for better readability, but it is front-loaded with the purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one parameter and no output schema, the description comprehensively covers the output by listing numerous properties. It does not mention errors or pagination, but for a simple retrieval tool, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with 'cid' well-described as PubChem Compound ID. The description adds context by listing output properties, which indirectly clarifies the parameter's role, but does not add new semantic information beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves standard properties for a PubChem CID and lists specific properties like molecular formula, weight, IUPAC name, etc. It distinguishes from siblings like get_classification and get_synonyms by focusing on compound-specific data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving compound properties but provides no guidance on when to use this tool over siblings. It does not mention alternatives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_synonyms
List all names (synonyms) for a CID — common names, IUPAC, trade names, CAS numbers, etc.
| Name | Required | Description | Default |
|---|---|---|---|
| cid | Yes | PubChem Compound ID | |
| max_results | No | Cap (default 50) | |
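A sketch that caps the synonym list below the default of 50; the CID is again a placeholder:

```json
{
  "name": "get_synonyms",
  "arguments": {
    "cid": 2244,
    "max_results": 20
  }
}
```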
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden. It lists types of names but does not disclose behavior such as result limits (max_results is described only in schema), pagination, rate limits, or whether the list is exhaustive. The description adds value but leaves gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that efficiently conveys the tool's purpose. Every word earns its place, with no unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should clarify the return format (e.g., list of strings). It does not explain how results are structured or if max_results affects output. The description is adequate for a simple list but lacks completeness for an agent to anticipate output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema: cid is already described as 'PubChem Compound ID' in schema, and max_results is only described in schema as 'Cap (default 50)'. The description does not elaborate on parameter behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all names (synonyms) for a CID, with specific examples like common names, IUPAC, trade names, and CAS numbers. It distinguishes itself from sibling tools, none of which focus on synonyms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for retrieving synonyms but does not specify when to use this tool versus alternatives. No explicit guidance on when not to use it or prerequisites like CID availability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
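An illustrative data_gap report with a hypothetical message; the optional context object is omitted because its internal structure is not documented:

```json
{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "data_gap",
    "message": "The PubChem tools cover basic compound lookup but offer no substructure search."
  }
}
```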
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description fully covers behavioral traits: rate-limited to 5 per identifier per day, doesn't count against quota, free, and that the team reads digests daily and feedback affects roadmap. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is a single coherent paragraph that front-loads the core purpose and includes all necessary details. Although informative, it could be slightly more structured with bullet points for readability, but it remains efficient and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a feedback tool with three parameters, a nested object, and an enum, the description covers purpose, usage guidelines, behavioral transparency, and parameter semantics comprehensively. It leaves no gaps that might lead an agent to misuse the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds significant meaning beyond enum values by explaining when each 'type' applies (e.g., 'bug = something broke or returned wrong data'). It also clarifies the 'message' parameter should be specific about tools/errors and mentions 2000 char max.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool is for giving feedback (broken, missing, or needs to exist) to the Pipeworx team. It explicitly lists feedback types (bug, feature, data_gap, praise) and distinguishes itself from sibling tools like ask_pipeworx, discover_tools, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explicitly says when to use: when a tool returns wrong/stale data, when a desired tool is missing, or for praise. It also provides when-not-to-use guidance (don't paste end-user prompt) and mentions rate limits and that it's free, guiding appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
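A sketch retrieving a single hypothetical key; per the description, calling with empty arguments (no key) would instead list all saved keys:

```json
{
  "name": "recall",
  "arguments": {
    "key": "target_ticker"
  }
}
```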
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds scoping information (identifier-based) and notes that omitting the key lists all keys. It does not disclose error behavior for missing keys or return details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each providing essential information: action, use case, and scoping/pairing. No redundant or irrelevant content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, and the description does not specify the return format or error handling. It covers scoping and the optional-key behavior, but lacks detail on what is returned for listing versus single retrieval. Adequate for a simple tool but could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with a clear description for 'key' parameter. Tool description largely repeats the schema's behavior for omitting key. Adds no new parameter-level details beyond that.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves a saved value or lists all keys, with a specific verb (retrieve/list) and resource (saved values). Distinct from siblings 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises using to look up context saved earlier without re-deriving, and pairs with 'remember' and 'forget'. Lacks explicit 'when not to use' but is clear about scope and pairing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
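An example using the relative-shorthand window suggested in the schema for typical monitoring:

```json
{
  "name": "recent_changes",
  "arguments": {
    "type": "company",
    "value": "AAPL",
    "since": "30d"
  }
}
```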
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses fan-out to SEC EDGAR, GDELT, USPTO and return structure. No annotations provided, so description does the work, but omits potential limitations like rate limits or data latency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise, well-structured, front-loaded with purpose. Every sentence adds value with examples and details. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers the multi-source nature and return format. No output schema, so description compensates. Could mention aggregation or performance, but sufficient for a moderately complex tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers all 3 parameters with descriptions, so baseline is 3. Description adds little beyond repeating schema info (e.g., date formats). No new parameter meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves recent changes for a company, with specific example queries. However, it does not explicitly distinguish from sibling tools like entity_profile or compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides multiple example user queries and context for when to use. Lacks guidance on when not to use or explicit alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
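A minimal example pairing one of the schema's sample keys with a resolved ticker as the value:

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL"
  }
}
```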
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Given no annotations, the description discloses key behavioral traits: key-value storage scoped by identifier, persistence differences (authenticated vs anonymous 24-hour memory). Does not cover all edge cases but sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, front-loaded with purpose, then usage, technical details, and pairing. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple write tool with two required params and no output schema, the description covers use cases, storage details, lifetime, and pairing with the related recall and forget tools. Complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 100% of parameters with descriptions. Description adds value by providing examples of key/value types and context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool saves data for reuse, with specific verb 'save' and resource 'data'. It distinguishes from siblings 'recall' and 'forget' by mentioning pairing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use 'when you discover something worth carrying forward' with examples. Mentions pairing with recall/forget but does not explicitly state when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
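An illustrative drug lookup using the example name from the description:

```json
{
  "name": "resolve_entity",
  "arguments": {
    "type": "drug",
    "value": "ozempic"
  }
}
```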
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses return values (IDs plus pipeworx:// citation URIs). As a read-only lookup, it is adequately transparent, though it does not mention failure modes or limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, well-structured, front-loaded with purpose, includes examples and usage guidance. No redundant or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, simple lookup), the description fully explains purpose, parameters, return values, and usage context. No output schema, but return values are described. Complete and self-contained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, giving baseline 3. Description adds value by explaining the meaning of parameters with examples (e.g., 'Apple' → AAPL for company, 'ozempic' for drug) and clarifying legitimate inputs for value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool looks up canonical/official identifiers for companies or drugs, listing specific ID systems (CIK, ticker, RxCUI, LEI). Examples provide concrete demonstrations. Distinguishes itself by specifying it should be used before other tools needing official identifiers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when a user mentions a name and you need the CIK...' and 'Use this BEFORE calling other tools that need official identifiers.' Implies when-not-to-use, but does not mention sibling alternatives like entity_profile or search_by_name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_by_name
Resolve a common chemical or drug name to PubChem CIDs. Use for "what's the CID of ibuprofen?" or to disambiguate. Returns CIDs (Compound IDs) matched to the name. Then use get_compound with the CID for properties.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Common, IUPAC, or brand name (e.g., "ibuprofen", "caffeine") | |
| max_results | No | Cap results (default 5) | |
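An example call using the description's sample name; max_results is optional and the value here is arbitrary:

```json
{
  "name": "search_by_name",
  "arguments": {
    "name": "ibuprofen",
    "max_results": 3
  }
}
```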
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states that the tool returns CIDs but does not disclose other behavior (side effects, auth requirements, rate limits). A safe read operation is implied but not explicit. Adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: purpose, usage examples, and next step. No redundancy or fluff. Front-loaded with key action. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple tool with full schema coverage and no output schema, description explains purpose and next steps. Minor gap: does not specify return format (e.g., list of CIDs or objects). Otherwise complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear parameter descriptions. Description adds that it returns CIDs matched to name, but does not add extra parameter-level semantics beyond schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states purpose: resolve chemical/drug names to PubChem CIDs. Specific verb 'resolve' and resource 'PubChem CIDs' differentiate from siblings like get_compound (properties) and get_synonyms (synonyms). Examples reinforce understanding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use for "what's the CID of ibuprofen?" or to disambiguate' and advises subsequent use of get_compound. Provides clear context, though no explicit when-not cases are given; siblings imply alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
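A minimal call reusing the example claim from the parameter description:

```json
{
  "name": "validate_claim",
  "arguments": {
    "claim": "Apple's FY2024 revenue was $400 billion"
  }
}
```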
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the return values (verdict, extracted form, actual value with citation, percent delta) and notes that it replaces multiple sequential calls. No contradictions. It could add error-handling details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with main action and usage, then provides details on scope, return types, and efficiency. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a single-param tool: explains purpose, domain, return structure, and efficiency. Missing explicit handling for out-of-scope claims or errors, but overall complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'claim' is well-described in the schema with examples. The description adds context about supported domain and format, enhancing understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it validates factual claims against authoritative sources, with specific examples like 'Is it true that...'. The tool is distinct from siblings such as ask_pipeworx or compare_entities, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases (e.g., 'Verify the claim that...') and notes the v1 limitation to company-financial claims. Does not explicitly state when not to use, but the context implies domain restriction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.