Exploit Intelligence Platform — CVE, Vulnerability and Exploit Database
Server Details
Real-time CVE, exploit, and vulnerability intelligence for AI assistants (350K+ CVEs, 115K+ PoCs)
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
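
Because the transport is Streamable HTTP, any MCP-compatible client can connect directly. Below is a minimal connection sketch assuming the official MCP Python SDK (the `mcp` package); the endpoint URL is a placeholder, since the actual address is not reproduced here.

```python
# Minimal connection sketch, assuming the official MCP Python SDK ("mcp" package).
# SERVER_URL is a placeholder -- substitute the real Streamable HTTP endpoint.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.invalid/mcp"  # hypothetical endpoint


async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            listing = await session.list_tools()
            print([tool.name for tool in listing.tools])  # expect the 17 tools below


if __name__ == "__main__":
    asyncio.run(main())
```
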
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 17 of 17 tools scored.
Each tool has a clearly distinct purpose with minimal overlap; for example, get_vulnerability provides a full intelligence brief, while search_vulnerabilities is for broader searches, and get_exploit_code retrieves source code separately from get_exploit_analysis. The descriptions specify when to use each tool, such as using search_exploits for structured filtering versus search_vulnerabilities for free-text queries, preventing confusion.
Tool names follow a consistent verb_noun pattern throughout, such as get_vulnerability, list_vendors, search_exploits, and audit_stack. There are no deviations in naming conventions, making the set predictable and easy for an agent to parse and understand.
With 17 tools, the server is well-scoped for its purpose as a comprehensive vulnerability and exploit database. Each tool serves a specific function, from auditing stacks to retrieving exploit details, without redundancy, and the count aligns with the complexity of the domain, covering lookups, searches, listings, and advanced analytics.
The tool set provides complete coverage for the domain, including vulnerability lookup, exploit analysis, author and vendor listings, health checks, and report generation. There are no obvious gaps; tools like generate_finding and get_nuclei_templates extend functionality beyond basic queries, ensuring agents can handle full workflows from discovery to reporting.
Available Tools
17 tools

audit_stack (A) · Read-only · Idempotent
Audit a technology stack for exploitable vulnerabilities. Accepts a comma-separated list of technologies (max 5) and searches for critical/high severity CVEs with public exploits for each one, sorted by EPSS exploitation probability. Use this when a user describes their infrastructure and wants to know what to patch first. Example: technologies='nginx, postgresql, node.js' returns a risk-sorted list of exploitable CVEs grouped by technology. Rate-limit cost: each technology requires up to 2 API calls, so 5 technologies count as up to 10 calls toward your rate limit.
| Name | Required | Description | Default |
|---|---|---|---|
| technologies | Yes | Comma-separated list of technologies (e.g. 'nginx, postgresql, node.js'). Max 5. |
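
As an illustration, a call might look like the hypothetical helper below, which assumes a `ClientSession` opened as in the connection sketch near the top of this page.

```python
from mcp import ClientSession


async def audit_my_stack(session: ClientSession) -> None:
    """Audit up to 5 technologies; each technology may cost up to 2 API calls."""
    result = await session.call_tool(
        "audit_stack",
        {"technologies": "nginx, postgresql, node.js"},  # comma-separated, max 5
    )
    # The response is a risk-sorted list of exploitable CVEs grouped by technology.
    for block in result.content:
        print(getattr(block, "text", block))
```
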
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure for a tool with no annotations: reveals filtering logic (critical/high severity only, public exploits required), sorting method (EPSS exploitation probability), output structure (grouped by technology), and critical rate-limit costs (2 calls per technology, max 10). No contradictions; carries full behavioral burden effectively.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, each earning its place: purpose, input spec, usage trigger, example, and rate-limit warning. Front-loaded with the core action, logically sequenced, and no redundancy with schema fields despite covering the same constraints. Efficient density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a single-parameter tool without output schema: describes return format (risk-sorted CVE list grouped by technology), constraint limits (max 5), and API costs. No gaps remain for agent invocation decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), description adds value by contextualizing the parameter constraints within operational limits (rate-limit costs scale with technology count) and integrating the example 'nginx, postgresql, node.js' into the workflow explanation rather than just listing format requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Audit' targeting 'technology stack' for 'exploitable vulnerabilities', clearly distinguishing from siblings like search_vulnerabilities (general search) or get_vulnerability (single lookup) by emphasizing multi-technology infrastructure assessment and prioritization ('what to patch first').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit trigger condition: 'Use this when a user describes their infrastructure and wants to know what to patch first.' Clear context distinguishes it from single-CVE lookup tools, though it could strengthen further by explicitly contrasting with search_exploits or search_vulnerabilities siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_health (A) · Read-only · Idempotent
Check the EIP API health and data freshness. Returns database status and timestamps for each of the 10 ingestion sources (NVD, KEV, EPSS, ExploitDB, GitHub, Metasploit, etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
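
A hypothetical freshness check, assuming an open `ClientSession` as in the connection sketch above:

```python
from mcp import ClientSession


async def verify_freshness(session: ClientSession) -> str:
    """Check database status and the per-source ingestion timestamps."""
    result = await session.call_tool("check_health", {})
    # The response covers the 10 ingestion sources (NVD, KEV, EPSS, ExploitDB, ...).
    return "".join(getattr(block, "text", "") for block in result.content)
```
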
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden. It discloses detailed return behavior (database status, timestamps for 10 ingestion sources) and enumerates specific sources (NVD, KEV, EPSS, etc.), effectively compensating for missing output schema. Lacks explicit safety/cost notes, though 'Check' implies read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence establishes purpose (health/freshness), second details specific return structure (10 sources named). Front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a zero-parameter health check. Compensates for missing output schema by detailing exactly what gets returned (status, timestamps) and enumerating the 10 ingestion sources that are checked.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains 0 parameters, establishing baseline score of 4. Description correctly omits parameter discussion since none exist, focusing entirely on behavior and return values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Check' with resource 'EIP API health' and scope 'data freshness'. Clearly distinguishes from sibling data-query tools like get_vulnerability or search_exploits by focusing on system health rather than content retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through 'Check... health' semantics (use when verifying system status), but lacks explicit guidance such as 'Call this before other operations to verify freshness' or contrast with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_finding (A) · Read-only · Idempotent
Generate a pentest report finding in Markdown format for a specific vulnerability. Fetches full detail and formats it as a professional finding with severity, CVSS, description, affected products, exploit availability, and references. Accepts both CVE-IDs and EIP-IDs. Optionally include the target system tested and tester notes. The output is ready to paste into a pentest report. Example: cve_id='CVE-2024-3400', target='fw.corp.example.com', notes='Confirmed RCE via GlobalProtect gateway'.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | Tester notes to include in the finding. Optional. | |
| cve_id | Yes | CVE or EIP identifier (e.g. 'CVE-2024-3400') | |
| target | No | Target system tested (e.g. 'fw.corp.example.com'). Optional. |
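
A hedged sketch of an invocation, reusing the example values from the description above and assuming an open `ClientSession`:

```python
from mcp import ClientSession


async def draft_finding(session: ClientSession) -> str:
    """Generate a Markdown pentest finding; target and notes are optional."""
    result = await session.call_tool(
        "generate_finding",
        {
            "cve_id": "CVE-2024-3400",  # CVE-ID or EIP-ID
            "target": "fw.corp.example.com",  # optional
            "notes": "Confirmed RCE via GlobalProtect gateway",  # optional
        },
    )
    # The Markdown finding is ready to paste into a report.
    return "".join(getattr(block, "text", "") for block in result.content)
```
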
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description carries burden well by disclosing it 'fetches full detail' (external lookup), outputs specific fields (severity, CVSS, references), and produces immediate usable output. Minor gap: doesn't mention side effects, rate limits, or whether generation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfect structure: purpose → output details → ID types accepted → optional params → output readiness → example. Zero waste. Each sentence advances understanding without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description compensates well by enumerating output components (severity, CVSS, affected products, etc.) and format. For a 3-param tool with 100% schema coverage, sufficiently complete despite missing annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description adds value by clarifying EIP-IDs are accepted alongside CVE-IDs (not obvious from schema), and provides concrete example showing parameter interaction semantics (cve_id format, target syntax, notes content).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent: specific verb 'generate' + resource 'pentest report finding' + format 'Markdown'. Distinguishes from sibling 'get_vulnerability' by emphasizing 'professional finding' formatting and 'ready to paste' output rather than raw data retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through 'pentest report' context and optional parameter descriptions, but lacks explicit when-to-use vs siblings (e.g., doesn't state 'use this instead of get_vulnerability when you need formatted output'). The example helps clarify but doesn't establish decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_author (A) · Read-only · Idempotent
Get an exploit author's profile with all their exploits. Returns author name, handle, total exploit count, activity start date, and a paginated list of their exploits with CVE context. Use this when asked about a specific researcher like 'show me all exploits by Chocapikk'.
| Name | Required | Description | Default |
|---|---|---|---|
| author_name | Yes | Author name (e.g. 'Chocapikk') |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure burden. It discloses pagination behavior ('paginated list') and specific return fields ('author name, handle, total exploit count, activity start date, and... exploits with CVE context'), providing necessary context about data volume and structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences: first establishes purpose and return value structure, second provides usage guidance with example. Information is front-loaded and every clause earns its place—no redundancy with schema or tautology.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by enumerating return fields and pagination behavior. For a single-parameter retrieval tool, the description is appropriately complete, though it could optionally mention expected behavior when an author is not found.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). The description adds semantic value by providing the concrete example 'Chocapikk' in the usage sentence, clarifying that 'author_name' expects researcher handles/names rather than numeric IDs or other identifiers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with clear resource 'exploit author's profile' and scope 'with all their exploits'. It clearly distinguishes from sibling tool 'list_authors' (which implies listing multiple authors) by focusing on retrieving a specific single author's complete profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: 'Use this when asked about a specific researcher like...' with concrete example query 'show me all exploits by Chocapikk'. This effectively contrasts with 'list_authors' (browse/discover) and 'search_exploits' (search by content, not author).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_cwe (A) · Read-only · Idempotent
Get details for a specific CWE including full name, description, exploit likelihood, parent CWE, and total vulnerability count. Example: cwe_id='CWE-79' returns details about Cross-Site Scripting.
| Name | Required | Description | Default |
|---|---|---|---|
| cwe_id | Yes | CWE identifier (e.g. 'CWE-79' or '79') |
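
A small sketch showing that either identifier form is accepted (hypothetical helper, assuming an open `ClientSession`):

```python
from mcp import ClientSession


async def cwe_details(session: ClientSession, cwe_id: str = "CWE-79") -> str:
    """Fetch details for one weakness class; 'CWE-79' and bare '79' both work."""
    result = await session.call_tool("get_cwe", {"cwe_id": cwe_id})
    return "".join(getattr(block, "text", "") for block in result.content)
```
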
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It effectively compensates by enumerating exactly what data is returned (full name, description, exploit likelihood, parent, count), though it omits explicit read-only/safety declarations or rate limit notes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste: the first front-loads the action and return payload, the second provides a concrete usage example. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description appropriately documents return values by listing specific fields. With only one parameter and no complex nested objects, this level of documentation is adequate, though explicit mention of error behavior would strengthen it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value beyond the schema by including a concrete example (CWE-79 maps to Cross-Site Scripting) that clarifies expected input format and semantic meaning of the identifier.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves detailed information for a specific CWE (verb+resource), and distinguishes it from sibling 'list_cwes' by listing rich return fields (exploit likelihood, parent CWE, vulnerability count) that imply a detailed single-record lookup versus a summary list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a concrete example ('cwe_id='CWE-79' returns details about Cross-Site Scripting') that implicitly guides correct usage, but lacks explicit when-to-use guidance versus 'list_cwes' or prerequisites for the identifier format.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_exploit_analysis (A) · Read-only · Idempotent
Get the full AI analysis for a single exploit by its platform ID. Returns classification (working_poc, trojan, suspicious, scanner, stub, writeup), attack type, complexity, reliability, confidence score, authentication requirements, target software, a summary of what the exploit does, prerequisites, MITRE ATT&CK techniques, deception indicators for trojans, and the standalone backdoor-review verdict with operator-risk notes when available. Use this to check if an exploit is safe before reviewing its code. Example: exploit_id=61514 returns a TROJAN warning with deception indicators.
| Name | Required | Description | Default |
|---|---|---|---|
| exploit_id | Yes | Platform exploit ID (the [id=XXXXX] number from results — NOT the EDB number) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It comprehensively documents return values including classification enum values (working_poc, trojan, etc.), conditional fields ('when available'), and risk indicators ('TROJAN warning'). However, it omits mention of error handling, caching behavior, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with zero waste: Sentence 1 states purpose, Sentence 2 enumerates return fields, Sentence 3 gives usage context, Sentence 4 provides concrete example. Information-dense yet logically sequenced with most critical information front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description effectively documents the complex return structure (classification types, MITRE techniques, deception indicators). Single parameter is well-covered. Could improve by noting 'not found' error behavior or cache staleness, but adequate for tool complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage and clearly documents the exploit_id parameter including distinguishing platform ID from EDB numbers. The description adds a concrete example (61514), but with complete schema coverage, baseline 3 is appropriate given minimal additional semantic value needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states specific verb ('Get'), resource ('full AI analysis for a single exploit'), and identifier ('platform ID'). It distinguishes from sibling 'get_exploit_code' by emphasizing 'AI analysis' vs raw code, and from 'search_exploits' by focusing on a single item by ID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use this to check if an exploit is safe before reviewing its code.' This establishes a clear workflow (safety check before code access) and implies the alternative (reviewing code directly via get_exploit_code), though it doesn't explicitly name the sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_exploit_code (A) · Read-only · Idempotent
Retrieve the source code of a specific exploit by its platform ID. IMPORTANT: Use the platform's internal ID shown as [id=XXXXX] in results, NOT the ExploitDB number (EDB-XXXXX). These are different numbering systems. Returns code from the exploit archive. If no file_path is specified, auto-selects the most relevant code file. Use this to analyze exploit mechanics, understand attack techniques, or review PoC code.
| Name | Required | Description | Default |
|---|---|---|---|
| file_path | No | Relative path inside the exploit archive (optional — auto-selects if omitted). Absolute paths and traversal patterns are rejected. | |
| exploit_id | Yes | Platform exploit ID (the [id=XXXXX] number from results — NOT the EDB number) |
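
Taken together with get_exploit_analysis above, the descriptions imply a safety-gated workflow: review the analysis first, then fetch code only if the exploit is not flagged. The sketch below is hypothetical; it assumes the classification keyword appears in the analysis text, which this page does not document.

```python
from mcp import ClientSession


async def fetch_exploit_safely(session: ClientSession, exploit_id: int) -> str | None:
    """Review the AI analysis, then retrieve source code only if it looks safe.

    Assumption: the classification (e.g. 'trojan') is present in the analysis
    text; the exact response layout is not documented on this page.
    """
    analysis = await session.call_tool("get_exploit_analysis", {"exploit_id": exploit_id})
    analysis_text = "".join(getattr(block, "text", "") for block in analysis.content)

    if "trojan" in analysis_text.lower():
        return None  # skip flagged exploits

    # exploit_id is the platform [id=XXXXX] number, NOT the EDB number.
    # file_path is omitted, so the server auto-selects the most relevant file.
    code = await session.call_tool("get_exploit_code", {"exploit_id": exploit_id})
    return "".join(getattr(block, "text", "") for block in code.content)
```
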
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full disclosure burden. Explains auto-selection behavior when file_path is omitted, clarifies return value ('Returns code from the exploit archive'), and details the ID numbering confusion. Lacks rate limit or error behavior but covers primary behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences with zero redundancy. Critical ID warning placed early with 'IMPORTANT' flag. Logical flow: purpose → critical constraint → return behavior → parameter behavior → use cases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a 2-parameter retrieval tool without annotations. Covers resource type, identification method, file selection logic, and security constraints (via the file_path schema rules). The absence of an output schema is offset by the description's return-value statement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description reinforces exploit_id semantics (platform vs EDB ID) and file_path auto-selection behavior, adding emphasis but not new schema information. Adequate given complete schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Retrieve) and resource (source code of exploit), clearly identifying it as a code retrieval tool. Explicitly distinguishes from other tools by specifying 'platform ID' and contrasting with 'ExploitDB number' to prevent confusion with lookup_alt_id or search_exploits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical usage constraint with 'IMPORTANT' tag clarifying platform ID vs EDB number distinction. States use cases ('analyze exploit mechanics...'). Could explicitly contrast with sibling get_exploit_analysis but provides sufficient context for correct invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_nuclei_templates (A) · Read-only · Idempotent
Get Nuclei scanner templates and recon dorks for a vulnerability. Returns template metadata, severity, verification status, tags, and ready-to-use Shodan, FOFA, and Google dork queries for target identification. Accepts both CVE-IDs and EIP-IDs. Use this to plan scanning or reconnaissance.
| Name | Required | Description | Default |
|---|---|---|---|
| cve_id | Yes | CVE or EIP identifier (e.g. 'CVE-2024-27198') |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adequately discloses return values (metadata, severity, tags, dorks) and input flexibility (CVE/EIP acceptance). However, it omits safety profile (read-only nature), rate limits, or cache behavior that would be essential for a tool without annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly constructed sentences front-load the purpose, detail returns, specify inputs, and provide usage context with zero redundancy. Each sentence delivers distinct, essential information without repetition of schema or structural metadata.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter and focused scope, the description adequately compensates for the missing output schema by enumerating return components (metadata, dorks, tags). It sufficiently covers the tool's contract, though additional notes on error conditions or data freshness would strengthen completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the cve_id parameter fully documented as 'CVE or EIP identifier.' The description reinforces this with 'Accepts both CVE-IDs and EIP-IDs' but adds no additional semantic detail about format constraints or validation rules beyond the schema definition, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and specific resources ('Nuclei scanner templates and recon dorks'), distinguishing it from siblings like get_exploit_code (raw code) or get_vulnerability (general info). The scope is precisely bounded to template retrieval and reconnaissance queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit context with 'Use this to plan scanning or reconnaissance,' establishing clear when-to-use guidance. However, it lacks explicit comparisons to siblings like search_exploits or get_exploit_analysis that might retrieve similar vulnerability data through different mechanisms.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_platform_stats (A) · Read-only · Idempotent
Get platform-wide statistics from the Exploit Intelligence Platform. Returns total counts of vulnerabilities, exploits, KEV entries, Nuclei templates, vendors, and authors, plus the last data update timestamp.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It effectively compensates for the missing output schema by explicitly listing return values (counts of vulnerabilities, exploits, KEV entries, etc., plus timestamp), clarifying what data the operation produces.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences. The first identifies the operation and scope; the second details the return payload. Every word earns its place with no redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters, no annotations, no output schema), the description is complete. It defines the domain scope, specifies the platform context, and exhaustively lists the return values that would otherwise be undocumented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool accepts zero parameters. Per the scoring guidelines, 0 parameters establishes a baseline score of 4. No additional parameter context is required or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') + resource ('platform-wide statistics') + domain context ('Exploit Intelligence Platform'). It clearly distinguishes this aggregate statistics tool from sibling specific-entity getters like get_author or get_vulnerability, and from search_tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying it retrieves aggregate counts rather than individual records, distinguishing it from specific lookups. However, it lacks explicit guidance on when to use this versus check_health or specific search tools, and does not mention prerequisites or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vulnerability (A) · Read-only · Idempotent
Get a full intelligence brief for a specific vulnerability. Accepts both CVE-IDs (e.g. CVE-2024-3400) and EIP-IDs (e.g. EIP-2026-12345 for pre-CVE entries). Returns detailed information including CVSS score and vector, EPSS exploitation probability, CISA KEV status, description, affected products, ranked exploits (grouped by Metasploit modules, verified ExploitDB, GitHub PoCs, and trojans), Nuclei scanner templates with recon dorks, alternate identifiers, and references. Exploits are ranked by quality: Metasploit modules first (peer-reviewed), then verified ExploitDB, then GitHub by stars. Trojans are flagged at the bottom.
| Name | Required | Description | Default |
|---|---|---|---|
| cve_id | Yes | CVE or EIP identifier (e.g. 'CVE-2024-3400' or 'EIP-2026-12345') |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and comprehensively discloses return structure including CVSS/EPSS scores, exploit ranking methodology (Metasploit > ExploitDB > GitHub), and security warnings about trojans at the bottom.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense and well-structured (purpose → input → output details). Though lengthy, every sentence earns its place by explaining the ranked exploit output structure, which is critical for correct tool use.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists and the tool returns complex data (exploits ranked by quality, scanner templates, multiple scoring systems), the description provides complete coverage of expected return values and their organization.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with clear description, but the tool description adds valuable semantic context that EIP-IDs are for pre-CVE entries, helping users understand when to use that format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (Get) and resource (full intelligence brief for a specific vulnerability), with specific examples distinguishing it from sibling search/list tools by emphasizing retrieval of a single vulnerability by ID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides input format guidance (CVE vs EIP IDs) and explains EIP is for pre-CVE entries, but lacks explicit guidance on when to use this tool vs siblings like search_vulnerabilities or lookup_alt_id.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_authors (A) · Read-only · Idempotent
List exploit authors/researchers ranked by exploit count. Returns the top security researchers with their exploit counts and handles. Use this when asked 'who are the top exploit authors?' or 'who writes the most exploits?'
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default: 1) | |
| per_page | No | Results per page (1-50, default: 25) |
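
A hypothetical pagination helper, assuming an open `ClientSession`; the schema above caps per_page at 50:

```python
from mcp import ClientSession


async def top_authors(session: ClientSession, pages: int = 2) -> list[str]:
    """Collect the ranked author list across several pages."""
    rows: list[str] = []
    for page in range(1, pages + 1):
        result = await session.call_tool(
            "list_authors",
            {"page": page, "per_page": 50},  # schema caps per_page at 50
        )
        rows.extend(getattr(block, "text", "") for block in result.content)
    return rows
```
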
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden. It discloses ranking behavior ('ranked by exploit count') and return payload ('exploit counts and handles'), but omits mutation behavior (implied read-only by 'List'), pagination limits, or rate limiting context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences: purpose first, return value second, usage trigger third. Zero redundancy; every sentence earns its place. Appropriate length for tool complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Good coverage for a list query tool: explains ranking logic, return fields, and usage context. Minor gap: could explicitly mention pagination behavior (though clear in schema) to signal large result sets, but adequate given parameter simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for both parameters (page, per_page with defaults/ranges). Description adds no redundant parameter detail, which is appropriate when schema is self-documenting. Baseline 3 acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'List exploit authors/researchers ranked by exploit count' provides clear verb (List), resource (authors/researchers), and distinguishing scope (ranked by exploit count). Differentiates from sibling 'get_author' by emphasizing the ranked/quantitative nature vs. individual lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Strong usage signals with explicit example queries: 'Use this when asked who are the top exploit authors?' However, lacks explicit when-NOT-to-use guidance or mention of sibling 'get_author' for specific author lookups rather than ranked lists.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_cwes (A) · Read-only · Idempotent
List CWE (Common Weakness Enumeration) categories ranked by vulnerability count. Returns CWE IDs, names, short labels, exploit likelihood, and how many CVEs have that weakness. Use this when asked 'what are the most common vulnerability types?'
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It successfully documents the return structure (IDs, names, labels, likelihood, counts), but omits other behavioral traits like pagination behavior, rate limits, or explicit read-only safety confirmation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. Front-loaded with the core action and return payload, followed immediately by usage context. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by enumerating specific return fields. Adequate for a parameterless listing tool, though could be strengthened with pagination or result limiting behavior given it returns a ranked list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, warranting baseline score per rubric. Schema is empty object with 100% coverage trivially satisfied. No parameter documentation needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'List' + resource 'CWE categories' + scope 'ranked by vulnerability count'. Clearly distinguishes from sibling 'get_cwe' (singular retrieval) by emphasizing the ranked listing of multiple weaknesses, while also detailing the specific data fields returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance with the quoted user query pattern ('what are the most common vulnerability types?'). However, lacks explicit when-not-to-use guidance or a named alternative when the user needs a specific CWE rather than a ranked list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_products (A) · Read-only · Idempotent
List products for a specific vendor with vulnerability counts. Use this to discover exact product names for filtering. Product names in the database use CPE conventions (e.g. 'exchange_server' not 'exchange', 'windows_10' not 'windows 10'). Example: vendor='microsoft' returns products like exchange_server, windows_10, office, edge_chromium.
| Name | Required | Description | Default |
|---|---|---|---|
| vendor | Yes | Vendor name (e.g. 'microsoft', 'apache', 'fortinet') |
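
Because product names follow CPE conventions, a discovery step like the hypothetical sketch below keeps later filtering (for example via the product parameter of search_exploits) aligned with the database's naming; it assumes an open `ClientSession`.

```python
from mcp import ClientSession


async def discover_products(session: ClientSession, vendor: str = "microsoft") -> str:
    """List a vendor's products to learn their exact CPE-style names."""
    result = await session.call_tool("list_products", {"vendor": vendor})
    # Returned names use CPE conventions ('exchange_server', 'windows_10'),
    # and can be reused verbatim as the 'product' filter in search_exploits.
    return "".join(getattr(block, "text", "") for block in result.content)
```
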
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that results include 'vulnerability counts', explains critical CPE naming conventions (e.g., 'exchange_server' vs 'exchange'), and provides concrete input/output examples. Lacks operational details like pagination or error cases, but covers data format behavior essential for correct usage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each earning its place: purpose, usage guideline, data format convention, and concrete example. Front-loaded with action verb and resource. Zero redundancy or tautology.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter list tool with no output schema, description adequately explains the return value concept ('vulnerability counts') and critical data conventions (CPE). Minor gap in explicit output structure description, but sufficient given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (vendor param documented), establishing baseline of 3. Description adds value by illustrating the vendor-product relationship with concrete example ('vendor=microsoft' returns specific products) and explaining CPE conventions that clarify what constitutes valid product identifiers in this domain.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'List' with resource 'products' and scope 'for a specific vendor with vulnerability counts'. Clearly distinguishes from sibling 'list_vendors' by specifying it returns products for a given vendor, not a list of vendors.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use this to discover exact product names for filtering'. Provides clear workflow context (discovery phase before filtering), though it doesn't explicitly name which sibling tool to use for the subsequent filtering step.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_vendors (A) · Read-only · Idempotent
List software vendors ranked by vulnerability count. Returns the top 200 vendors with their total CVE counts. Use this when asked 'which vendors have the most vulnerabilities?' or to understand the threat landscape by vendor.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses the 200-item limit ('top 200'), ranking criteria ('ranked by vulnerability count'), and return data ('total CVE counts'). Missing data freshness or pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first defines action and behavioral constraints, second provides usage examples. Perfectly front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a no-parameter tool without output schema. Describes return values conceptually ('top 200 vendors with their total CVE counts') though could enumerate specific fields. Completes the picture sufficiently for selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters. Per rubric, baseline is 4 for zero-parameter tools. Description appropriately requires no parameter explanation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'List' with resource 'software vendors' and distinguishes from siblings via 'ranked by vulnerability count'—contrasting with list_products (which lists products) and list_authors (which lists people).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage triggers ('when asked which vendors have the most vulnerabilities') and context ('to understand the threat landscape'). Lacks explicit named alternatives or exclusions (e.g., doesn't mention when to use list_products instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_alt_id (A) · Read-only · Idempotent
Look up a vulnerability by an alternate identifier such as an ExploitDB ID (EDB-XXXXX) or GitHub Security Advisory ID (GHSA-XXXXX). Returns the matching CVE-ID with basic severity info. Use this when you have an EDB number or GHSA ID and need to find the corresponding CVE.
| Name | Required | Description | Default |
|---|---|---|---|
| alt_id | Yes | Alternate ID (e.g. 'EDB-48537', 'GHSA-jfh8-c2jp-5v3q') |
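
A hedged sketch of the resolution workflow the description implies: map an alternate ID to its CVE, then fetch the full brief with get_vulnerability. It assumes the CVE-ID appears in the response text, which is not documented here, and an open `ClientSession`.

```python
import re

from mcp import ClientSession


async def resolve_alt_id(session: ClientSession, alt_id: str = "EDB-48537"):
    """Resolve an EDB/GHSA identifier to a CVE, then pull the intelligence brief."""
    lookup = await session.call_tool("lookup_alt_id", {"alt_id": alt_id})
    text = "".join(getattr(block, "text", "") for block in lookup.content)

    match = re.search(r"CVE-\d{4}-\d{4,}", text)  # assumes the CVE-ID is in the text
    if match is None:
        return None  # no matching CVE found

    return await session.call_tool("get_vulnerability", {"cve_id": match.group(0)})
```
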
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so description carries full burden. It discloses return values ('matching CVE-ID with basic severity info') which compensates for missing output_schema. However, omits safety traits (read-only status), error handling (what happens on no match), or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences front-loaded with action and resource. Minor redundancy between 'alternate identifier such as...' and 'when you have an EDB number or GHSA ID', but both sentences serve distinct purposes (purpose definition vs. usage trigger). No filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool, description adequately compensates for missing output_schema by specifying return format (CVE-ID + severity). Sufficiently complete given low complexity, though mentioning 'not found' behavior would strengthen it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with technical description. Description adds valuable domain semantics by expanding acronyms (ExploitDB ID, GitHub Security Advisory ID) and reinforcing the ID patterns, helping the agent understand what constitutes a valid alternate identifier beyond just the parameter name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Look up', resource 'vulnerability by alternate identifier', and concrete examples (EDB-XXXXX, GHSA-XXXXX). Distinguishes from sibling get_vulnerability (likely takes CVE directly) and search tools by specifying exact ID format requirements.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit when-to-use clause: 'Use this when you have an EDB number or GHSA ID and need to find the corresponding CVE.' Lacks explicit exclusions (e.g., 'do not use if you already have a CVE ID') but effectively signals the tool's specific niche via input type requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_exploits (A) · Read-only · Idempotent
Browse and filter exploits using STRUCTURED FILTERS ONLY (no free-text query). Use this to filter by source (github, metasploit, exploitdb, nomisec, gitlab, inthewild, vulncheck_xdb, patchapalooza), language (python, ruby, etc.), LLM classification (working_poc, trojan, suspicious, scanner, stub, writeup, tool, no_code), author, min stars, code availability, CVE ID, vendor, or product. Also filter by AI analysis: attack_type (RCE, SQLi, XSS, DoS, LPE, auth_bypass, info_leak), complexity (trivial/simple/moderate/complex), reliability (reliable/unreliable/untested/theoretical), requires_auth. NOTE: To search by product name (e.g. 'OpenSSH', 'Apache'), use search_vulnerabilities instead — it has free-text query and get_vulnerability already includes exploits in the response. Examples: source='metasploit' for all Metasploit modules; attack_type='RCE' with reliability='reliable' for weaponizable RCE exploits; cve='CVE-2024-3400' for all exploits targeting a specific CVE; vendor='mitel' for all Mitel exploits.
| Name | Required | Description | Default |
|---|---|---|---|
| cve | No | Filter by CVE ID (e.g. 'CVE-2024-3400') — returns all exploits for that CVE | |
| page | No | Page number (default: 1) | |
| sort | No | Sort order | |
| author | No | Filter by author name | |
| source | No | Filter by source | |
| vendor | No | Filter by vendor name (e.g. 'mitel', 'fortinet') — returns exploits for all CVEs affecting that vendor | |
| product | No | Filter by product name (e.g. 'micollab', 'pan-os') | |
| has_code | No | Only exploits with downloadable code | |
| language | No | Filter by language: python, ruby, go, c, etc. | |
| per_page | No | Results per page (1-25, default: 10) | |
| min_stars | No | Minimum GitHub stars | |
| complexity | No | Filter by exploit complexity | |
| attack_type | No | Filter by attack type from AI analysis (case-insensitive on input; canonical casing returned) | |
| reliability | No | Filter by exploit reliability | |
| requires_auth | No | Filter by whether exploit requires authentication | |
| llm_classification | No | Filter by LLM classification |
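To make the structured-filter paradigm concrete, here is a sketch of a tools/call request assembled from the 'weaponizable RCE exploits' example in the description above. The JSON-RPC framing follows the MCP specification; the id value is arbitrary, and has_code is assumed to accept a boolean.
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_exploits",
    "arguments": {
      "attack_type": "RCE",
      "reliability": "reliable",
      "has_code": true,
      "per_page": 10
    }
  }
}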
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the critical behavioral constraint (structured filters only, no free-text query) and clarifies tool relationships (get_vulnerability already includes exploits). Could improve by mentioning pagination defaults or empty result behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Despite length, every sentence serves a distinct purpose: scope definition, filter enumeration, sibling differentiation, and usage examples. The examples section is particularly high-value for agent decision-making. Minor deduction for slight redundancy between filter list and schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 16 parameters with rich filtering capabilities and no output schema, the description adequately covers the filtering paradigm, enumerates all major filter categories, provides usage examples, and clarifies integration with sibling tools. Missing only operational details like rate limiting or specific return structure, which are less critical for a read-only search operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description adds categorical grouping (e.g., clustering AI analysis parameters: attack_type, complexity, reliability) and provides concrete value examples, but largely mirrors the schema's parameter documentation without adding significant semantic depth beyond the structured filter constraint.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a precise action (browse/filter) and resource (exploits), explicitly constrains the scope to STRUCTURED FILTERS ONLY, and immediately distinguishes from free-text alternatives. It clearly identifies what makes this tool unique compared to siblings like search_vulnerabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit when-not-to-use guidance ('NOTE: To search by product name... use search_vulnerabilities instead') and references sibling tool get_vulnerability as an alternative source for exploits. The examples section provides concrete decision-making patterns (e.g., when to filter by source vs attack_type).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_vulnerabilities (A): Read-only, Idempotent
Search the Exploit Intelligence Platform for vulnerabilities (CVEs). Returns a list of matching CVEs with CVSS scores, EPSS exploitation probability, exploit counts, CISA KEV status, VulnCheck KEV, InTheWild.io exploitation signals, and ransomware attribution. Supports full-text search, severity/vendor/product/ecosystem/CWE filters, CVSS/EPSS thresholds, plus any_exploited and ransomware filters. When sort is omitted, the API may automatically prefer newest exploitation, exploit, or nuclei-template activity based on the filters you set. Examples: query='apache httpd' with has_exploits=true; vendor='fortinet' with severity='critical' and is_kev=true sorted by epss_desc; any_exploited=true with ransomware=true for ransomware-linked CVEs; cwe='89' with min_cvss=9 for critical SQL injection CVEs.
| Name | Required | Description | Default |
|---|---|---|---|
| cwe | No | Filter by CWE ID (e.g. '79' or 'CWE-79') | |
| page | No | Page number (default: 1) | |
| sort | No | Sort order. Aliases are normalized to the current server schema. | |
| year | No | Filter by CVE year (e.g. 2024) | |
| query | No | Search keywords (e.g. 'apache httpd', 'log4j'). Optional if filters are provided. | |
| is_kev | No | Only return CISA Known Exploited Vulnerabilities | |
| vendor | No | Filter by vendor name (e.g. 'microsoft', 'fortinet') | |
| date_to | No | End date for CVE publication (YYYY-MM-DD) | |
| product | No | Filter by product name (e.g. 'exchange', 'pan-os') | |
| min_cvss | No | Minimum CVSS v3 score (0-10) | |
| min_epss | No | Minimum EPSS score (0-1) | |
| per_page | No | Results per page (1-25, default: 10) | |
| severity | No | Filter by severity level | |
| date_from | No | Start date for CVE publication (YYYY-MM-DD) | |
| ecosystem | No | Filter by package ecosystem | |
| min_score | No | Minimum score for the selected score_version (0-10) | |
| has_nuclei | No | Only return CVEs with Nuclei scanner templates | |
| ransomware | No | Only return CVEs with confirmed ransomware campaign use | |
| has_exploits | No | Only return CVEs with public exploit code | |
| any_exploited | No | Only return CVEs exploited in the wild (CISA KEV + VulnCheck KEV + InTheWild.io) | |
| score_version | No | Score family for min_score / score_desc |
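As a sketch, the 'critical Fortinet KEVs sorted by EPSS' example from the description maps to a request like the one below. The parameter names come from the schema table; the boolean type for is_kev and the exact sort token 'epss_desc' are taken on the description's word rather than verified against the server.
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_vulnerabilities",
    "arguments": {
      "vendor": "fortinet",
      "severity": "critical",
      "is_kev": true,
      "sort": "epss_desc"
    }
  }
}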
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully documents the return structure (CVSS, EPSS, CISA/VulnCheck KEV, ransomware attribution), data source ('Exploit Intelligence Platform'), and critical behavioral nuance: automatic sorting preference based on filter context when sort is omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense and front-loaded: purpose and return values occupy the first sentence, capability overview the second, behavioral notes the third, followed by concrete examples. Every sentence earns its place despite the length, though the examples string could be formatted for scannability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero annotations, no output schema, and 21 complex parameters, the description adequately compensates by detailing the return payload structure and providing exhaustive filter examples. It successfully bridges the output schema gap by enumerating all returned intelligence fields (CVSS, EPSS, KEV statuses, etc.).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description elevates this through rich examples that demonstrate parameter interplay and real-world value (e.g., 'cwe='89' with min_cvss=9 for critical SQL injection CVEs'), adding practical semantic context beyond the schema's individual field descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description immediately states 'Search the Exploit Intelligence Platform for vulnerabilities (CVEs)' with specific verb and resource. It distinguishes from sibling search_exploits (which searches exploit code) by emphasizing CVE metadata, and from get_vulnerability (singular) by specifying it 'Returns a list of matching CVEs'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it lacks explicit 'use X instead' sibling comparisons, it provides extensive concrete examples showing parameter combinations for specific use cases (e.g., 'vendor='fortinet' with severity='critical' and is_kev=true' for critical Fortinet KEVs). It also documents auto-sorting behavior when sort is omitted, guiding effective invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
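Putting the sibling-tool guidance into practice, a plausible two-step agent workflow (a sketch, not something the listing prescribes) is: free-text or product-filtered CVE discovery with search_vulnerabilities, then structured exploit retrieval with search_exploits using a CVE taken from the first response. Using the same tools/call framing shown above, and assuming the first call surfaces CVE-2024-3400 (the CVE used in the descriptions' own examples), the argument objects would be:
Step 1, search_vulnerabilities arguments:
{ "product": "pan-os", "any_exploited": true }
Step 2, search_exploits arguments:
{ "cve": "CVE-2024-3400", "reliability": "reliable" }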
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!