nvd
Server Details
NVD MCP — wraps the NIST National Vulnerability Database API (free, no auth)
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-nvd
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across all 14 tools scored; lowest 2.9/5.
Significant overlap exists: ask_pipeworx can potentially answer questions that other tools are designed for, and tools like entity_profile, compare_entities, and recent_changes all deal with entity information. The CVE tools are distinct but still overlap with ask_pipeworx. The boundaries between the general query tools are unclear.
The naming is inconsistent: some tools use verb-noun patterns (ask_pipeworx, compare_entities, validate_claim), while others are noun phrases (entity_profile, recent_changes, recent_cves), mixing descriptive and imperative styles with no single convention.
Fourteen tools is a reasonable count, but the server name 'nvd' suggests a focus on vulnerability data while only 3 tools relate to CVEs. The majority cover a general data-query system (Pipeworx) plus memory utilities, leaving the tool set poorly scoped for the server's apparent purpose.
For the CVE domain, the tools cover basic lookup and search but lack features like batch updates or alerts. The Pipeworx tools seem comprehensive for their domain, but as a whole the surface is incomplete because the server bundles two unrelated domains without clear integration, leaving gaps in both.
Available Tools
14 tools
ask_pipeworx (grade A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
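As a rough sketch of how an MCP client might invoke this tool, the JSON-RPC tools/call request below uses an example question; the exact envelope your client emits may differ.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": { "question": "What is the current US unemployment rate?" }
  }
}
```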
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that Pipeworx 'picks the right tool, fills the arguments, and returns the result,' which adds useful behavioral context about automation. However, it doesn't cover potential limitations like rate limits, error handling, or data source reliability, leaving gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the core purpose stated first, followed by behavioral details and concrete examples. Every sentence earns its place by clarifying functionality without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (natural language querying with backend automation), no annotations, and no output schema, the description does a good job explaining the core behavior and usage. It covers the input approach and process flow but lacks details on output format or error cases, which could be improved for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'question' well-documented in the schema. The description adds minimal value beyond the schema by emphasizing 'plain English' and 'natural language' in the examples, but doesn't provide additional syntax or format details. Baseline 3 is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer from data source'), and distinguishes from siblings by emphasizing natural language querying without needing to browse tools or learn schemas. The examples further clarify the scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: for asking questions in plain English to get answers from data sources, without needing to browse other tools. It implicitly distinguishes from siblings like 'get_cve' or 'search_cves' by not requiring specific tool knowledge. However, it lacks explicit when-not-to-use guidance or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (grade A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
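Illustrative arguments for a tools/call request; the tickers are example values only:

```json
{ "name": "compare_entities", "arguments": { "type": "company", "values": ["AAPL", "MSFT", "GOOGL"] } }
```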
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It mentions returns (paired data, resource URIs) but does not state whether the tool is read-only, what happens on invalid input, rate limits, or authentication needs. The behavioral context is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences, each adding distinct value: purpose, data details, and efficiency benefit. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no nested objects, no output schema), the description covers what the tool does, its inputs, outputs (paired data + URIs), and constraints (2-5 entities). It is complete for an agent to use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline is 3. The description adds value by explaining the meaning of the 'type' enum values and the 'values' array with domain-specific examples (SEC EDGAR, FDA), beyond the schema's basic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 entities side by side in one call, with specific data fields for company and drug types. It distinguishes itself by noting it replaces 8-15 sequential agent calls, making its purpose unique among siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool (comparing entities) and implies it replaces multiple calls, but does not explicitly state when not to use it or provide alternative tools for edge cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (grade A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
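A minimal example of the arguments an agent might pass; the query text and limit are illustrative:

```json
{ "name": "discover_tools", "arguments": { "query": "FDA drug adverse events", "limit": 5 } }
```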
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it performs a search based on natural language queries and returns relevant tools with names and descriptions. However, it lacks details on rate limits, error handling, or performance characteristics, which would be helpful for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage guidance. Both sentences earn their place by providing essential information without redundancy. It is appropriately sized and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with 2 parameters) and no output schema, the description is mostly complete. It explains the purpose, usage, and behavior well, but could benefit from mentioning the return format more explicitly (e.g., list of tools with metadata) since there's no output schema provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description does not add any additional meaning or context beyond what the schema provides for the parameters. It mentions the query concept but doesn't elaborate on syntax or usage beyond the schema's examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resource ('tool catalog'), and explicitly distinguishes it from sibling tools by emphasizing its role for discovery among 500+ tools. It provides a clear action and target, making it easy to understand what the tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), including a specific condition (500+ tools) and alternative context (vs. not using it). It clearly directs the agent on optimal usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (grade A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
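A sketch of a call using a ticker; per the description, a zero-padded CIK such as "0000320193" would work the same way (example values only):

```json
{ "name": "entity_profile", "arguments": { "type": "company", "value": "AAPL" } }
```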
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses that it returns pipeworx:// citation URIs and replaces many calls, but lacks details on error handling, performance, rate limits, or required permissions. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient and well-structured: one sentence for main purpose, one for data details, one for benefit and alternative. No wasted words, front-loaded with key action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description adequately explains return content (pipeworx:// URIs) and includes a list of data sources. For a complex profile tool, this covers what the agent needs to know about inputs, outputs, and when to use alternatives.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-documented. Description adds value by explaining type limitation to company, value formats (ticker or CIK), and clarifying that names are not supported. Provides context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides a full profile across multiple packs, listing specific data sources (SEC filings, XBRL, patents, news, LEI) and distinguishing from sequential calls. The verb 'profile' accurately describes the resource and scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use it (instead of 10-15 sequential calls) and when not to (for federal contracts, call usa_recipient_profile directly). It also implies a prerequisite: use resolve_entity if you only have a name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (grade C)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. 'Delete' implies a destructive mutation, but it doesn't disclose whether this is permanent, requires specific permissions, has side effects (e.g., affecting other tools), or what happens on success/failure. For a destructive tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's front-loaded with the core action ('Delete') and resource, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's destructive nature, lack of annotations, and no output schema, the description is incomplete. It doesn't cover behavioral aspects like permanence, error handling, or return values, which are critical for safe usage. The high schema coverage doesn't compensate for these gaps in a mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'key' documented as 'Memory key to delete'. The description adds no additional meaning beyond this, such as key format, examples, or constraints. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the verb 'Delete' strongly implies a destructive operation versus retrieval or storage.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing memory key), exclusions, or relationships to sibling tools like 'recall' (likely for retrieval) or 'remember' (likely for storage).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_cve (grade A)
Get full details for a specific CVE (e.g., "CVE-2021-44228"). Returns description, severity, CVSS score, affected products, and remediation info. Use when you need comprehensive vulnerability analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| cve_id | Yes | CVE identifier, e.g. "CVE-2021-44228" |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | CVE identifier |
| status | Yes | Vulnerability status |
| severity | Yes | Severity rating (CRITICAL, HIGH, MEDIUM, LOW, NONE) |
| published | Yes | Publication date in ISO 8601 format |
| cvss_score | Yes | CVSS base score (v3.1 preferred, falls back to v2) |
| description | Yes | English description of the vulnerability |
| last_modified | Yes | Last modification date in ISO 8601 format |
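A hedged sketch of a request plus a response shaped to match the output schema above; the field values are illustrative, not an actual NVD record:

```json
{
  "request": { "name": "get_cve", "arguments": { "cve_id": "CVE-2021-44228" } },
  "response": {
    "id": "CVE-2021-44228",
    "status": "Analyzed",
    "severity": "CRITICAL",
    "published": "2021-12-10T10:15:09.000Z",
    "cvss_score": 10.0,
    "description": "Apache Log4j2 JNDI lookup flaw enabling remote code execution (Log4Shell).",
    "last_modified": "2023-11-07T04:09:14.000Z"
  }
}
```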
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return content ('full details including description, severity, and affected products'), which is valuable behavioral information. However, it lacks details on error handling, rate limits, authentication needs, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and includes essential details without waste. Every part earns its place by clarifying the action, input, and output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no annotations, no output schema), the description is reasonably complete. It covers purpose, input example, and return content. However, without an output schema, it could benefit from more detail on the structure of 'full details' to aid the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'cve_id' well-documented in the schema. The description adds minimal value by reinforcing the example format, but does not provide additional semantics beyond what the schema already specifies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch a specific CVE') and resource ('by its ID'), with an explicit example ('CVE-2021-44228'). It distinguishes from sibling tools by specifying retrieval of a single CVE rather than recent or search operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'a specific CVE by its ID,' suggesting this tool is for known CVE lookups. However, it does not explicitly state when to use this versus the 'recent_cves' or 'search_cves' alternatives, nor does it provide exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (grade A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
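Illustrative arguments only; the optional context field is omitted here because its exact structured shape isn't documented beyond the schema note:

```json
{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "feature",
    "message": "A tool for looking up CVEs by affected product or CPE would be useful."
  }
}
```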
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses rate limit and free usage, and advises on content, but does not describe what happens after sending (e.g., confirmation, response). No annotations exist, so description carries full burden.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences with clear front-loading of purpose, efficient use of words, no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple feedback tool, but it omits post-submission behavior (e.g., whether a confirmation is returned). No output schema is provided, so the description should cover the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so parameters are well-documented. The description reinforces usage but adds minimal new semantic value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool sends feedback to the Pipeworx team, listing specific use cases (bug reports, feature requests, etc.), and is distinct from sibling tools that query data or manage memory.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (bug reports, feature requests, etc.) and provides a negative guideline (avoid including end-user prompt verbatim) but does not explicitly contrast with alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (grade A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
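Two illustrative invocations: the first retrieves a specific key, the second omits key to list everything saved (key names are examples only):

```json
{ "name": "recall", "arguments": { "key": "target_ticker" } }
{ "name": "recall", "arguments": {} }
```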
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the dual functionality (retrieve by key vs list all) and persistence across sessions, which are valuable behavioral traits. However, it doesn't address important aspects like error handling (what happens if key doesn't exist), performance characteristics, or format of returned memories. The description adds meaningful context but leaves gaps in behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence states the core functionality with conditional behavior, and the second sentence provides important context about session persistence. There's zero wasted language, and the information is front-loaded with the most critical usage information first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (dual functionality, session persistence) with no annotations and no output schema, the description does well but has gaps. It covers the core operations and temporal scope effectively. However, without an output schema, the description doesn't explain what format memories are returned in or what the list operation returns. For a memory retrieval tool, understanding the return format is important but not addressed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter. The description adds valuable semantic context by explaining the conditional behavior: 'Retrieve a previously stored memory by key, or list all stored memories (omit key).' This clarifies that omitting the parameter triggers a different operation (listing) rather than being an error. Since there's only one parameter, the baseline would be 4, and the description enhances understanding of the parameter's role.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from siblings like 'remember' (store) and 'forget' (delete) by focusing on retrieval operations. The description explicitly mentions retrieving context saved earlier in current or previous sessions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Retrieve a previously stored memory by key, or list all stored memories (omit key).' It also specifies the alternative behavior based on parameter presence. The context about retrieving from current or previous sessions gives clear temporal boundaries for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (grade A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
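A sketch using the relative shorthand for since; an ISO date like "2026-04-01" is equally valid per the schema (example values only):

```json
{ "name": "recent_changes", "arguments": { "type": "company", "value": "MSFT", "since": "30d" } }
```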
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It describes the parallel fan-out to three sources, the output structure (structured changes, count, URIs), and the accepted input formats. However, it does not state whether the operation is read-only, whether it requires authentication, or whether rate limits apply. Since it's a data retrieval tool, the missing read-only hint is a minor gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the core purpose. Each sentence adds substantive information (behavior, parameter guidance, output format). It is concise without being terse, though it could be slightly more structured with bullet points for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately covers returns (structured changes, count, URIs) and explains the parallel fan-out for 'company'. For a tool with 3 parameters and no nesting, it provides sufficient context for an agent to understand scope and outcome. Lacks details on error handling or edge cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining the meaning of 'since' with examples (ISO date or relative), clarifying 'value' as ticker or CIK, and noting that 'type' is currently limited to 'company'. This goes beyond the schema's brief descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'What's new about an entity since a given point in time' and elaborates on the specific behavior for the only supported type ('company'). It distinguishes itself from sibling tools like 'entity_profile' (static data) and 'compare_entities' (comparison) by focusing on temporal changes across multiple sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly suggests use cases: 'brief me on what happened with X' or change-monitoring workflows. It provides guidance on the 'since' parameter format and typical values. However, it does not explicitly mention when not to use this tool or list alternatives beyond implying its specialization.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_cves (grade C)
Get CVEs published within a date range (use ISO 8601 format, e.g., "2024-01-01T00:00:00.000Z"). Returns CVE IDs, descriptions, and severity. Use to track newly disclosed vulnerabilities.
| Name | Required | Description | Default |
|---|---|---|---|
| end | Yes | End date in ISO 8601 format (e.g. "2024-01-31T23:59:59.000Z") | |
| limit | No | Maximum number of results to return (default 10, max 2000) | |
| start | Yes | Start date in ISO 8601 format (e.g. "2024-01-01T00:00:00.000Z") |
Output Schema
| Name | Required | Description |
|---|---|---|
| cves | Yes | List of CVEs published within the date range |
| returned | Yes | Number of CVEs returned in this response |
| total_results | Yes | Total number of CVEs published in the date range |
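Illustrative arguments covering one calendar month with a raised limit; the dates follow the ISO 8601 format the schema requires:

```json
{
  "name": "recent_cves",
  "arguments": {
    "start": "2024-01-01T00:00:00.000Z",
    "end": "2024-01-31T23:59:59.000Z",
    "limit": 50
  }
}
```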
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the date format requirement but does not cover other behavioral traits such as rate limits, authentication needs, error handling, or what the return format looks like (e.g., list of CVEs with details). This leaves significant gaps for a tool that fetches data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a specific format requirement. It is appropriately sized with zero waste, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of fetching CVEs (a data retrieval operation with potential for large results), no annotations, and no output schema, the description is incomplete. It lacks information on return values (e.g., structure of CVE data), pagination, error cases, or performance considerations, which are important for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents all parameters (start, end, limit) with descriptions and defaults. The description adds value by reinforcing the ISO 8601 format requirement with an example, but does not provide additional meaning beyond what the schema offers, meeting the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Fetch' and the resource 'CVEs published within a date range', making the purpose specific and understandable. However, it does not explicitly differentiate this tool from its siblings 'get_cve' and 'search_cves', which would be needed for a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings 'get_cve' and 'search_cves'. It only specifies a date format requirement, which is more about parameter semantics than usage context. No explicit alternatives or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (grade A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
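A minimal example pairing one of the schema's suggested keys with an example value; pair with recall and forget as the description notes:

```json
{ "name": "remember", "arguments": { "key": "target_ticker", "value": "AAPL" } }
```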
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the persistence differences between authenticated users ('persistent memory') and anonymous sessions ('last 24 hours'), and the cross-tool context functionality. However, it doesn't mention potential limitations like storage capacity, rate limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly sized and front-loaded with the core purpose in the first sentence. Every sentence earns its place: the first states what it does, the second provides usage context, and the third adds important behavioral context about persistence. There's zero wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with no annotations and no output schema, the description provides good coverage of the tool's behavior and usage. It explains what the tool does, when to use it, and important persistence characteristics. The main gap is the lack of information about return values or error conditions, but given the tool's relative simplicity, the description is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents both parameters thoroughly. The description doesn't add significant meaning beyond what the schema provides - it mentions 'key-value pair' but doesn't elaborate on parameter usage, constraints, or best practices beyond what's in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('store') and resource ('key-value pair in your session memory'), and distinguishes it from siblings by specifying it's for saving data across tool calls. It provides concrete examples of what to store ('intermediate findings, user preferences, or context'), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but doesn't explicitly mention when not to use it or name specific alternatives. It implies usage scenarios but lacks explicit exclusions or comparisons with sibling tools like 'forget' or 'recall'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (grade A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
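Example arguments for the drug case; for companies, value could equally be a name like "Apple" that needs resolving (illustrative only):

```json
{ "name": "resolve_entity", "arguments": { "type": "drug", "value": "ozempic" } }
```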
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses return values (ticker, CIK, name, URIs) and that it's a single call. It does not mention side effects or auth needs, but as a read-like operation, the transparency is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. The first sentence states the purpose, the second provides details and example. Every sentence adds value and is well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description comprehensively explains return values and the tool's value proposition (replacing 2-3 calls). It covers version, type limitation, and accepted input formats.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and descriptions are clear. The description adds examples and the v1 limitation, enhancing understanding beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool resolves an entity to canonical IDs, gives a concrete example with company type, and specifies the accepted inputs and outputs. It distinguishes from sibling tools which are about queries and memory.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description indicates when to use (single call to replace multiple lookups) and provides version and type context. It does not explicitly state when not to use or name alternatives, but the use case is well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_cves (grade B)
Search for CVE vulnerabilities by keyword. Returns CVE ID, description, severity, and CVSS score. Use when researching security threats or checking if a known vulnerability affects your systems.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return (default 10, max 2000) | |
| query | Yes | Keyword(s) to search in CVE descriptions |
Output Schema
| Name | Required | Description |
|---|---|---|
| cves | Yes | List of CVE records matching the search |
| returned | Yes | Number of CVEs returned in this response |
| total_results | Yes | Total number of CVEs matching the search query |
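An illustrative keyword search; per the output schema, the response would include cves, returned, and total_results (example values only):

```json
{ "name": "search_cves", "arguments": { "query": "log4j remote code execution", "limit": 25 } }
```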
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions the return format (CVE ID, description, severity, CVSS score) but lacks critical behavioral details such as pagination, rate limits, authentication needs, error handling, or data freshness. For a search tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste: the first states the purpose, and the second specifies the return format. It is front-loaded and appropriately sized, earning its place efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with two parameters) and no annotations or output schema, the description is minimally adequate. It covers purpose and return format but lacks behavioral context and usage guidelines, making it incomplete for optimal agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters (query and limit). The description adds no parameter-specific information beyond implying keyword search, which is already covered in the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search CVE vulnerabilities by keyword' specifies the verb (search) and resource (CVE vulnerabilities). It distinguishes from 'get_cve' (likely fetch single CVE) and 'recent_cves' (likely fetch recent CVEs without search), but doesn't explicitly contrast them, keeping it at 4 rather than 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_cve' or 'recent_cves'. It mentions what the tool does but offers no context on appropriate use cases, exclusions, or comparisons with siblings, leaving the agent to infer usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (grade A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description discloses the types of claims supported, the authoritative sources, and the return format (verdict, structured form, actual value with citation, percent delta). It also notes it's v1, implying potential limitations. This provides good transparency for a read-only fact-checking tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with about five sentences, each adding meaningful information. No wasted words, and it is well-structured to convey purpose, domain, return values, and efficiency gains.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description fully explains what it returns (verdict, structured form, value, delta, citation) and the domain scope. It is complete and leaves no ambiguity about the tool's capabilities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100% for the single 'claim' parameter, so baseline is 3. The description adds value with natural-language context and example formats (e.g., 'Apple's FY2024 revenue was $400 billion'), which enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fact-checks natural-language claims against authoritative sources, specifying the domain (company-financial claims for US public companies) and the verb 'Fact-check' with a clear resource. Distinguishes from siblings as no other sibling performs this specialized function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the tool replaces multiple sequential agent calls, giving context on when to use it. It specifies the supported claim domain (company-financial) and sources (SEC EDGAR + XBRL). However, it does not explicitly state when not to use it or mention alternatives, but the guidance is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.