Server Details

Threat intel + your scans/findings/Shield posture. CVE, EPSS, KEV, package vuln lookup, DAST.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions: A

Average 3.8/5 across 14 of 14 tools scored. Lowest: 3.1/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but 'lookup_cve' and 'get_weaponization_score' both focus on a single CVE, which could cause confusion. However, descriptions clarify their different outputs (full enrichment vs. numerical score).

Naming Consistency: 2/5

Naming conventions are inconsistent: some tools use 'get_', others 'list_', 'lookup_', or 'assess_', while 'scan_url' and 'search_cves' follow different patterns entirely. There is no consistent verb_noun pattern across the set.

Tool Count: 5/5

14 tools is well within the ideal 3-15 range, covering a broad scope of security assessment and intelligence without being excessive.

Completeness: 3/5

Core workflows (scanning, vulnerability lookup, threat stats) are covered, but update/delete operations for scans and a domain verification tool are missing. Notable gaps exist but core functionality is present.

Available Tools

14 tools
assess_dependency: A

Check a single package@version for known vulnerabilities via OSV.dev (npm, PyPI, Go, Maven, NuGet, RubyGems, Packagist, crates.io, etc.). Returns advisories with CVE IDs, severity, fixed versions, and references. Free tier eligible.

Parameters (JSON Schema)
- name (required): Package name (e.g., "lodash", "django", "github.com/gorilla/mux")
- version (required): Exact version (e.g., "4.17.20")
- ecosystem (required): Package ecosystem: npm, PyPI, Go, Maven, NuGet, RubyGems, Packagist, crates.io
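Based on the documented schema above, a client could validate arguments before issuing the call. This is an illustrative sketch, not the server's own code; the helper name and arguments-dict shape are assumptions modeled on a generic MCP tools/call payload.

```python
# Illustrative pre-flight validation for assess_dependency arguments.
# The ecosystem set is taken verbatim from the tool description.
ECOSYSTEMS = {"npm", "PyPI", "Go", "Maven", "NuGet",
              "RubyGems", "Packagist", "crates.io"}

def build_assess_dependency_args(name: str, version: str, ecosystem: str) -> dict:
    """Check inputs against the documented schema and return the arguments dict."""
    if not name or not version:
        raise ValueError("name and version are both required")
    if ecosystem not in ECOSYSTEMS:
        raise ValueError(f"unsupported ecosystem: {ecosystem!r}")
    return {"name": name, "version": version, "ecosystem": ecosystem}

args = build_assess_dependency_args("lodash", "4.17.20", "npm")
```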
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It mentions OSV.dev, free tier eligibility, and return fields, but lacks details on rate limits, authentication needs, or behavior when no vulnerabilities are found.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two succinct sentences: first states purpose and scope, second describes output and free tier. No unnecessary words, information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool, the description covers input (single package@version, ecosystems), output (advisories with CVE, severity, fixed versions, references). Missing behavior on no results, but generally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with clear descriptions. The description adds context by giving examples (lodash, django) and listing ecosystems, enhancing understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks a single package@version for known vulnerabilities using OSV.dev, listing supported ecosystems. This distinguishes it from siblings like assess_tech_risk (broader) and lookup_cve (for a specific CVE).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have a specific package to check, but it does not explicitly state when to use vs alternatives like assess_tech_risk or search_cves, nor does it provide exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

assess_tech_risk: B

Assess security risk for a list of technologies. Returns known CVEs affecting each technology with severity breakdown. Input: comma-separated technology names only.

Parameters (JSON Schema)
- technologies (required): Comma-separated list of technology names (e.g., "Apache HTTP Server, OpenSSL, nginx"). Max 50 technologies.
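The schema's strict comma-separated format and 50-entry ceiling can be enforced client-side. A minimal sketch, assuming only the constraints stated above; the helper name is hypothetical.

```python
def parse_technologies(raw: str) -> list[str]:
    """Split the comma-separated technologies string per the documented
    format and enforce the schema's 50-entry maximum."""
    techs = [t.strip() for t in raw.split(",") if t.strip()]
    if len(techs) > 50:
        raise ValueError("at most 50 technologies are accepted")
    return techs
```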
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It indicates a read operation returning CVEs and severity, but does not disclose authentication needs, rate limits, side effects, or whether external calls are made. The behavioral profile is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at three sentences, front-loading the purpose. The third sentence ('Input: comma-separated technology names only') is somewhat redundant with the schema, but overall it is efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description adequately covers the return value (CVEs per technology with severity) and input format. Missing context about error handling or limitations, but still relatively complete given low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing full description for the single parameter. The description adds 'Input: comma-separated technology names only,' which largely repeats the schema. It marginally emphasizes strict formatting but does not add significant new meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Assess security risk') and the resource ('list of technologies'). It specifies the output (CVEs with severity breakdown), making the purpose evident. However, it does not explicitly differentiate from sibling tools like 'assess_dependency', which might operate on similar inputs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides the input format ('comma-separated technology names only') but offers no guidance on when to use this tool versus alternatives. With siblings like 'assess_dependency' and CVE lookup tools, the agent lacks criteria to choose appropriately.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_kev_recent: B

Get recently added entries to the CISA Known Exploited Vulnerabilities (KEV) catalog.

Parameters (JSON Schema)
- days (optional): Number of days to look back (1-365, default: 30)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description alone must convey behavioral traits. It discloses that entries are 'recently added' but lacks details on output format, ordering, pagination, rate limits, or data freshness, which are essential for a data retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no wasted words. Front-loading the purpose is effective, and a bit more context could be added without harming conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (one optional parameter, no output schema), the description is still incomplete. It does not specify the return format or structure of the entries, nor does it provide any contextual cues like typical use cases or limitations, which an AI agent would need to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the only parameter 'days' with its range and default. The description adds no additional semantic value beyond what the schema provides, resulting in a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and the specific resource ('recently added entries to the CISA Known Exploited Vulnerabilities (KEV) catalog'), making the tool's purpose unambiguous and distinguishable from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving recent KEV entries but does not explicitly guide when to use this tool versus alternatives like search_cves or lookup_cve. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_my_posture: A

Get Shield WAF posture score and breakdown for a domain registered under this account. Returns 0-100 score, letter grade, per-component breakdown (origin lock, virtual patching, TLS, etc.), and edge_health (whether Shield is actually intercepting traffic). Requires API key.

Parameters (JSON Schema)
- domain (required): Domain registered under your Sectora account
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully explains what the tool returns (score, grade, breakdown, edge_health) and its requirement (API key). It accurately implies a read-only operation without side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences that cover purpose, output, and prerequisites without any redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description provides complete information: what it does, what it returns, and what is required. No gaps are apparent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the single required parameter. The description adds no extra meaning beyond the schema, meeting the baseline for full schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Get) and resource (Shield WAF posture score and breakdown) for a specific domain, distinguishing it from sibling tools like get_scan or lookup_cve.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies the prerequisite (requires API key) and the context (domain registered under your account), making it clear when to use this tool. However, it does not explicitly mention when not to use or provide alternatives, though the specific purpose makes this less critical.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_scan: A

Get a scan with all its findings (full detail: title, description, evidence, remediation, CVSS). Requires API key.

Parameters (JSON Schema)
- scan_id (required): Scan UUID
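The schema only says "Scan UUID", so a client might pre-validate the format before calling. This is a sketch: the RFC 4122-style pattern is an assumption, since the exact accepted UUID variants are not documented.

```python
import re

# 8-4-4-4-12 hex-digit pattern; which UUID variants the server
# actually accepts is not stated, so this is an assumption.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I)

def is_scan_id(value: str) -> bool:
    """Return True if the value looks like a UUID per the pattern above."""
    return bool(UUID_RE.match(value))
```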
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates a read operation ('Get') and that it returns full details. It mentions the authentication requirement. Without annotations, it provides basic behavioral context but lacks specifics on error handling, rate limits, or the structure of the response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-front-loaded sentence that efficiently conveys the purpose and key details. Every word adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no output schema), the description adequately covers the main action and required context. It could mention the return format (e.g., JSON) but the list of fields compensates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already fully describes the parameter scan_id with its pattern and description. The tool description adds no new semantic information about the parameter, so the baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a scan with all findings, listing specific detail fields. This distinguishes it from sibling tools like 'list_my_scans' which returns a list, and 'scan_url' which creates scans.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have a specific scan ID (since the only parameter is scan_id), and notes that an API key is required. However, it does not explicitly state when to use this over sibling tools like 'list_my_scans'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_threat_stats: A

Get statistics about the Sectora threat intelligence database including counts of EPSS scores, KEV entries, Nuclei templates, and exploits. No input required.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It indicates a read-only operation (statistics) but does not address potential rate limits, cost, or any side effects. Adequate for a simple stat tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that immediately states the purpose and lists key items. It is front-loaded and contains no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description provides a reasonable hint about return content (counts of specific categories). It does not mention other possible fields or format, but it is sufficient for a simple stats tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters and 100% schema coverage, the baseline is 4. The description adds 'No input required,' confirming the schema's emptiness, which is helpful but not essential.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the tool's purpose: fetching statistics about the Sectora threat intelligence database, listing specific categories (EPSS scores, KEV entries, Nuclei templates, exploits). It distinguishes from sibling tools like get_kev_recent by focusing on aggregate counts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'No input required,' clarifying that no parameters are needed. While it does not provide when-not-to-use or alternatives, the context of sibling tools implies this is for high-level overviews.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_weaponization_score: A

Get the weaponization score (0-100) for a CVE. Factors in EPSS, KEV status, exploit availability, Nuclei templates, and CVSS. Input must be a valid CVE ID.

Parameters (JSON Schema)
- cve_id (required): CVE identifier in format CVE-YYYY-NNNNN (e.g., CVE-2024-3400)
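The documented CVE-YYYY-NNNNN format maps naturally onto a regex check before calling. A sketch only: the example CVE-2024-3400 has a four-digit sequence, so the pattern below reads "NNNNN" as four or more digits, which is an assumption about what the server accepts.

```python
import re

# Format from the schema: CVE-YYYY-NNNNN (e.g., CVE-2024-3400).
# Sequence length of 4+ digits is an assumption based on the example.
CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_cve_id(value: str) -> bool:
    """Return True if the value matches the documented CVE ID format."""
    return bool(CVE_RE.match(value))
```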
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description reveals the tool computes a score based on specific factors and requires a valid CVE ID, but it does not disclose read-only status, error handling (e.g., invalid CVE), authentication needs, or side effects. Since no annotations are provided, the description carries the burden but misses some behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no wasted words. It front-loads the core purpose and immediately provides key details (score range, factors, input validation). Every sentence is essential.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description explains the output (score range) and the factors considered. It does not specify the exact return format (e.g., raw integer vs. object) or error conditions, but given the simplicity, it is largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema fully documents the parameter. The description adds no additional meaning beyond the schema's description (format example) and simply restates the input requirement. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves a weaponization score (0-100) for a CVE, listing specific factors. The name 'get_weaponization_score' directly reflects the action, and the tool is distinct from siblings like 'lookup_cve' which provide broader CVE details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a CVE ID is known and the weaponization score is needed, but provides no explicit guidance on when to use this tool versus alternatives (e.g., 'lookup_cve' for general information) or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_my_findings: A

List the API key owner's open security findings across all scans. Use this to answer "what's my current exposure?" Filter by severity, status, or domain. Returns finding summaries; call get_scan for full detail. Requires API key.

Parameters (JSON Schema)
- limit (optional): Max findings (1-100, default: 25)
- domain (optional): Limit to a single domain (e.g., app.example.com)
- status (optional): Filter by confirmation status
- severity (optional): Comma-separated severities to include: critical, high, medium, low, info
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It correctly states the tool lists findings (read-only), requires API key, and returns summaries. It does not mention pagination or what 'open' excludes (e.g., closed findings), but overall is transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three well-structured sentences. The first sentence states the core purpose, the second provides a use case, and the third explains filtering and redirects to get_scan. Every sentence is essential and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains that outputs are summaries and directs to get_scan for full details. It covers filtering, scope, and authentication. Missing details on output fields and pagination, but overall sufficiently complete for a listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema documents each parameter. The description adds value by summarizing that parameters enable filtering by severity, status, or domain, and explains the tool's owner-specific scope beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines the action (list), resource (open security findings), and scope (API key owner across all scans). It provides a specific use case ('what's my current exposure?') and distinguishes from siblings like get_scan and list_my_scans.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly recommends get_scan for full detail, implying when to use the tool (overview) and when to use alternatives. It also notes the API key requirement. However, it does not exclude other siblings or provide when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_my_scans: A

List the API key owner's recent scans with summary counts. Requires API key.

Parameters (JSON Schema)
- limit (optional): Max scans (1-100, default: 25)
- status (optional): Filter by status (queued, running, completed, failed)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It mentions authentication and summary counts but omits important traits like pagination, date ordering, what defines 'recent', and error behavior. Minimal disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with core action and context. Every word earns its place. No unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool is simple with two optional params and no output schema. Description covers authentication and basic result (summary counts) but lacks details on pagination, ordering, or differentiation from sibling list tools. Adequate but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers both parameters with descriptions (100% coverage). Description adds no additional meaning beyond the schema. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists the API key owner's recent scans with summary counts, using specific verb and resource. It distinguishes from siblings like get_scan (singular) and list_my_findings (different entity).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It specifies the prerequisite 'Requires API key' but offers no guidance on when to use this tool vs alternatives (e.g., get_scan for details, scan_url for new scans). No when-not or exclusionary context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_cve: A

Get full threat intelligence enrichment for a CVE including EPSS score, CISA KEV status, public exploits, Nuclei templates, risk level, and risk factors. Input must be a valid CVE ID.

Parameters (JSON Schema)
- cve_id (required): CVE identifier in format CVE-YYYY-NNNNN (e.g., CVE-2024-3400)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses input validation (valid CVE ID) and output content (list of enrichments). It does not mention read-only nature, rate limits, or error handling, but the disclosure is adequate for a simple lookup tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that front-loads the purpose and includes a necessary constraint. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one parameter and no output schema, the description explains the output contents well. It could mention response structure or error cases, but overall it's fairly complete for the given complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear parameter description. The description reinforces the input constraint but adds no new meaning beyond the schema. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('full threat intelligence enrichment for a CVE') and lists specific inclusions (EPSS score, CISA KEV status, public exploits, etc.). This differentiates it from siblings that search or list CVEs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly requires a valid CVE ID as input, providing usage context. However, it lacks guidance on when to use this tool versus alternatives like search_cves or get_trending_cves.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_ip_reputation (A)

Look up community IP reputation from Sectora Shield WAF network. Shows if an IP has been reported for attacks. Input must be a valid IPv4 address.

Parameters (JSON Schema)
- ip (required): IPv4 address to look up (e.g., 1.2.3.4)
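Since the tool accepts only a literal IPv4 address, a caller can pre-validate input with the standard library before spending a request. This is a local sketch, not part of the tool's own behavior, which is undocumented for invalid input.

```python
import ipaddress

def is_valid_ipv4(ip: str) -> bool:
    """Return True only for a literal IPv4 address (the tool rejects IPv6)."""
    try:
        ipaddress.IPv4Address(ip)
        return True
    except ValueError:
        return False
```

Note that ipaddress.IPv4Address also rejects hostnames and CIDR ranges, which matches the single-address contract implied by the description.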
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden. It states the data source (Sectora Shield WAF) and that the operation is a lookup. It does not detail rate limits, authentication needs, or error handling, but the read-only nature is implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no wasted words. Front-loaded with the core purpose, followed by a key constraint.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no output schema), the description covers the core functionality and input constraint. However, it lacks detail on the output format or behavior for invalid IPs, which would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a well-described ip parameter (pattern, maxLength). The description reinforces the IPv4 requirement but does not add new semantic information beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb ('look up'), resource ('community IP reputation'), and expected output ('shows if an IP has been reported for attacks'). It is specific and distinguishable from sibling tools like lookup_cve or assess_dependency.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for checking IP attack history and specifies input must be a valid IPv4 address. However, it lacks explicit when-not-to-use guidance or mention of alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scan_url (A)

Kick off a DAST security scan against a public URL the API key owner controls. Two-step flow: first call returns a preview (target, profile, ETA, quota remaining); confirm by calling again with confirm:true to actually start the scan. Returns scan_id; poll status with get_scan. Domain must be verified in the Sectora account. Daily quota: 25 scans/24h per user. Requires API key.

Parameters (JSON Schema)
- url (required): Full URL to scan (must start with http:// or https://)
- confirm (optional): Set to "true" to actually execute the scan. Without this, the call returns a preview only.
- profile (optional): Scan profile: quick (~2 min), standard (~10 min), deep (~30 min)
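The preview-then-confirm contract described above can be sketched with a stub transport. Everything here is an assumption modeled on the description: the call_tool callable, the argument names, and the response keys (quota_remaining, scan_id) are illustrative, not the server's documented wire format.

```python
def start_scan(call_tool, url: str, profile: str = "standard") -> str:
    """Two-step scan launch: preview first, then confirm.

    `call_tool(name, args)` is any callable mimicking an MCP tool
    invocation; argument names and response keys are assumptions.
    """
    # Step 1: no confirm flag, so the server returns a preview only.
    preview = call_tool("scan_url", {"url": url, "profile": profile})
    if preview.get("quota_remaining", 0) <= 0:
        raise RuntimeError("daily scan quota exhausted")
    # Step 2: confirm:"true" actually starts the scan.
    result = call_tool("scan_url",
                       {"url": url, "profile": profile, "confirm": "true"})
    return result["scan_id"]  # poll status afterwards with get_scan


# Minimal fake transport to exercise the flow locally.
def fake_call_tool(name, args):
    if args.get("confirm") == "true":
        return {"scan_id": "scan-123"}
    return {"target": args["url"], "profile": args["profile"],
            "eta_minutes": 10, "quota_remaining": 25}
```

Checking the preview's quota before confirming is one way an agent can honor the 25 scans/24h limit without triggering server-side rejections.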
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the two-step flow, the preview nature, the domain verification requirement, the daily quota, and the API key requirement. It does not explicitly state whether the scan is destructive, but DAST scanning is generally expected to be non-destructive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at five sentences, front-loading the purpose and then explaining the flow and constraints. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description mentions return of scan_id and 'poll status with get_scan.' It covers input, behavior, constraints, and output. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with detailed descriptions for each parameter. The description adds context about the two-step flow and quota, which goes beyond schema. It adds meaningful value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: 'Kick off a DAST security scan against a public URL the API key owner controls.' It specifies the verb and resource, and distinguishes from siblings like get_scan which is for polling status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the two-step flow (preview then confirm), prerequisites (domain verification), and quota (25 scans/24h). It implicitly points to get_scan for status polling. It could explicitly mention when not to use this tool or list alternatives, but the guidance is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_cves (B)

Search for CVEs by keyword, severity, or other filters. Query must be alphanumeric text.

Parameters (JSON Schema)
- query (required): Search keyword (CVE ID, technology name, or description)
- is_kev (optional): Only show CVEs in CISA KEV catalog
- severity (optional): Filter by severity
- has_exploit (optional): Only show CVEs with public exploits
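A caller can assemble these filters into an argument dict, omitting unset ones, and enforce the stated query constraint up front. The description only says the query "must be alphanumeric text", so the exact accepted character set is unstated; the pattern below (letters, digits, spaces, dots, hyphens) is an assumption chosen so that CVE IDs and product names still pass.

```python
import re

# Assumed interpretation of "alphanumeric text": letters, digits,
# spaces, dots, and hyphens, so CVE IDs like CVE-2024-3400 are admitted.
QUERY_RE = re.compile(r"^[A-Za-z0-9 .\-]+$")

def build_search_args(query, severity=None, is_kev=None, has_exploit=None):
    """Assemble a search_cves argument dict, omitting unset filters."""
    if not QUERY_RE.match(query):
        raise ValueError("query must be alphanumeric text")
    args = {"query": query}
    if severity is not None:
        args["severity"] = severity
    if is_kev is not None:
        args["is_kev"] = is_kev
    if has_exploit is not None:
        args["has_exploit"] = has_exploit
    return args
```

Omitting unset optional filters, rather than sending nulls, avoids guessing how the server treats explicit null values, which the description does not specify.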
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It adds only the constraint that the query must be alphanumeric text, but fails to disclose other behavioral traits, such as whether results are paginated, what rate limits apply, or whether a list of CVEs is returned. Insufficient for safe invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise and front-loaded. It could be improved by adding slightly more detail without becoming verbose, but current length is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 4 parameters, no output schema, and sibling tools, the description lacks details on return format (e.g., list of CVEs, pagination) and does not explain how filters interact. Incomplete for a search tool without output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all parameters. The description adds a constraint on query format (alphanumeric) beyond schema, but does not elaborate on other parameters. Baseline 3 is appropriate as schema already documents parameters adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Search for CVEs by keyword, severity, or other filters,' which specifies the verb (search) and resource (CVEs). It is distinguishable from sibling tools like lookup_cve (single CVE) and get_trending_cves (trending list), though 'other filters' is vague.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing to search CVEs by keyword or filters, but does not explicitly state when to use this tool versus alternatives like lookup_cve for specific IDs or get_trending_cves for trending data. No exclusions or alternatives mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
