
search_cves

Read-only

Search the NIST CVE database to identify cybersecurity vulnerabilities by keyword, severity, product, weakness type, or date range. Filter results to focus on CISA Known Exploited Vulnerabilities for targeted threat analysis.

Instructions

Search the NVD CVE database. Supports keyword, CVSS severity, CPE product, CWE weakness type, and date range filters. Set has_kev=True for only CISA Known Exploited Vulnerabilities. Results include CVE ID, description, severity, and score. May take 6+ seconds without an NVD API key due to rate limiting.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| keyword | No | Keyword search across CVE descriptions, e.g. 'Apache Log4j' | |
| severity | No | CVSS v3 severity: CRITICAL, HIGH, MEDIUM, or LOW | |
| cpe_name | No | CPE 2.3 product name, e.g. 'cpe:2.3:a:apache:log4j:*' | |
| cwe_id | No | CWE weakness ID, e.g. 'CWE-79' | |
| pub_start | No | Publication start date in ISO 8601, e.g. '2024-01-01T00:00:00.000' | |
| pub_end | No | Publication end date in ISO 8601, e.g. '2024-12-31T23:59:59.999' | |
| has_kev | No | If True, only return CVEs that are in the CISA KEV catalog | |
| limit | No | | |
| offset | No | | |
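The filters above map closely onto the query parameters of the public NVD CVE API 2.0 that this tool wraps. As a hedged illustration (a sketch of one plausible mapping, not the server's actual implementation), the inputs could be translated into a request URL like this; the function name `build_cve_query` is hypothetical:

```python
from urllib.parse import urlencode

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def build_cve_query(keyword=None, severity=None, cpe_name=None, cwe_id=None,
                    pub_start=None, pub_end=None, has_kev=False,
                    limit=20, offset=0):
    """Map the tool's inputs onto NVD 2.0 query parameters (a sketch)."""
    params = {"resultsPerPage": limit, "startIndex": offset}
    if keyword:
        params["keywordSearch"] = keyword
    if severity:
        params["cvssV3Severity"] = severity
    if cpe_name:
        params["cpeName"] = cpe_name
    if cwe_id:
        params["cweId"] = cwe_id
    if pub_start and pub_end:  # NVD requires both ends of a date range
        params["pubStartDate"] = pub_start
        params["pubEndDate"] = pub_end
    url = f"{NVD_API}?{urlencode(params)}"
    if has_kev:
        url += "&hasKev"  # hasKev is a valueless flag parameter in NVD 2.0
    return url
```

For example, `build_cve_query(keyword="Apache Log4j", severity="CRITICAL", has_kev=True)` yields a URL restricted to critical Log4j CVEs that also appear in the CISA KEV catalog.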

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations: it discloses performance characteristics ('May take 6+ seconds'), rate-limiting implications ('due to rate limiting'), and a specific filter behavior ('Set has_kev=True for only CISA Known Exploited Vulnerabilities'). The annotations cover the read-only and open-world aspects, and the description supplements them with these practical constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
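The '6+ seconds' caveat reflects NVD's published rate limits (roughly 5 requests per rolling 30 seconds without an API key, 50 with one). As a sketch of how a client might account for this, assuming the standard NVD convention of passing the key in an `apiKey` header (the helper names here are hypothetical, not part of this tool):

```python
import urllib.request

NVD_DELAY_NO_KEY = 6.0    # ~5 requests / 30 s on the public limit
NVD_DELAY_WITH_KEY = 0.6  # ~50 requests / 30 s with an API key

def build_nvd_request(url, api_key=None):
    """Attach the NVD API key header when one is available."""
    req = urllib.request.Request(url)
    if api_key:
        req.add_header("apiKey", api_key)  # NVD reads the key from this header
    return req

def pacing_delay(api_key=None):
    """Seconds to sleep between calls to stay under NVD's rate limit."""
    return NVD_DELAY_WITH_KEY if api_key else NVD_DELAY_NO_KEY
```

A caller would sleep `pacing_delay(api_key)` seconds between requests before passing each `Request` to `urllib.request.urlopen`, which is consistent with the multi-second latency the description warns about when no key is configured.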

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: the first states purpose and filters, the second specifies a key parameter behavior, and the third covers performance and rate limits. Every sentence adds value with zero waste, making it front-loaded and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, read-only/open-world annotations, and an output schema), the description is complete. It covers purpose, usage context, behavioral traits, and performance constraints. With an output schema present, it doesn't need to explain return values, and it adequately supplements the structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 78% schema description coverage, the schema already documents most parameters well. The description mentions the filter types (keyword, severity, CPE, CWE, date, KEV) but doesn't add significant semantic details beyond what's in the schema descriptions. It meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
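One example of such a non-obvious constraint here is the timestamp shape: the schema's `pub_start`/`pub_end` examples use ISO 8601 with millisecond precision. As an illustrative sketch (the helper name is hypothetical), that shape can be produced from a Python `datetime` by truncating microseconds to milliseconds:

```python
from datetime import datetime

def to_nvd_timestamp(dt: datetime) -> str:
    """Format a datetime as the ISO 8601 shape shown in the schema
    examples, truncating microseconds (%f) to milliseconds."""
    return dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3]
```

For instance, `to_nvd_timestamp(datetime(2024, 1, 1))` produces the schema's own example value `'2024-01-01T00:00:00.000'`.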

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search the NVD CVE database') and resources ('CVE database'), and distinguishes it from siblings by specifying the exact type of search (CVE-focused vs. other search tools like search_controls or search_cpes). It goes beyond a simple restatement of the name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by listing the supported filter types (keyword, severity, CPE, CWE, date range, KEV flag), which helps differentiate it from alternatives. However, it doesn't explicitly state when NOT to use it or name specific sibling tools as alternatives, keeping it at a 4.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

