adv_scan_file
Scan files for security vulnerabilities using customizable severity thresholds, exploit analysis, and Semgrep integration. Results are saved locally in JSON or markdown format.
Instructions
Scan a file for security vulnerabilities. Results are saved in the same directory as the target file.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| include_exploits | No | Whether to include exploit examples | true |
| output_format | No | Output format for results (json or markdown) | json |
| path | Yes | Path to the file to scan (must be a file, not a directory) | |
| severity_threshold | No | Minimum severity threshold (low, medium, high, or critical) | medium |
| use_llm | No | Whether to include LLM analysis prompts (for use with your client's LLM) | false |
| use_semgrep | No | Whether to include Semgrep analysis | true |
| use_validation | No | Whether to use LLM validation to filter false positives | true |
Input Schema (JSON Schema)
```json
{
  "properties": {
    "include_exploits": {
      "default": true,
      "description": "Whether to include exploit examples",
      "type": "boolean"
    },
    "output_format": {
      "default": "json",
      "description": "Output format for results (json or markdown)",
      "enum": [
        "json",
        "markdown"
      ],
      "type": "string"
    },
    "path": {
      "description": "Path to the file to scan (must be a file, not a directory)",
      "type": "string"
    },
    "severity_threshold": {
      "default": "medium",
      "description": "Minimum severity threshold",
      "enum": [
        "low",
        "medium",
        "high",
        "critical"
      ],
      "type": "string"
    },
    "use_llm": {
      "default": false,
      "description": "Whether to include LLM analysis prompts (for use with your client's LLM)",
      "type": "boolean"
    },
    "use_semgrep": {
      "default": true,
      "description": "Whether to include Semgrep analysis",
      "type": "boolean"
    },
    "use_validation": {
      "default": true,
      "description": "Whether to use LLM validation to filter false positives",
      "type": "boolean"
    }
  },
  "required": [
    "path"
  ],
  "type": "object"
}
```
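To illustrate how the schema's defaults apply, here is a minimal Python sketch that builds the argument object a client might send to `adv_scan_file`. Only the parameter names and default values come from the schema above; the `build_arguments` helper itself is hypothetical, not part of the tool.

```python
# Hypothetical helper: merge caller-supplied arguments with the
# documented defaults for adv_scan_file. Parameter names and default
# values are taken from the JSON schema above; everything else is
# illustrative only.

DEFAULTS = {
    "include_exploits": True,
    "output_format": "json",
    "severity_threshold": "medium",
    "use_llm": False,
    "use_semgrep": True,
    "use_validation": True,
}

def build_arguments(path, **overrides):
    """Return the full argument dict for an adv_scan_file call.

    'path' is the only required field; every other key falls back to
    its documented default unless explicitly overridden.
    """
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown arguments: {sorted(unknown)}")
    # Later dict unpackings win, so overrides take precedence over defaults.
    return {"path": path, **DEFAULTS, **overrides}

args = build_arguments("app/views.py", severity_threshold="high")
# args["severity_threshold"] is "high"; all other optional keys keep
# their documented defaults.
```

Since `path` must point to a file rather than a directory, a real client would typically validate that before sending the request; results are then written next to that file in the chosen `output_format`.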