# vet_command

Vet a shell command for destructive patterns before execution. Returns a verdict (CLEAN / CAUTION / REVIEW / BLOCK / UNVERIFIED) and a risk score (0-100) to prevent data loss.

## Instructions
Vet a single shell command for destructive patterns BEFORE execution. Detects rm -rf nested in chains, package-manager glob removal (apt remove '*nvidia*'), dd/mkfs/wipefs filesystem destruction, chmod 777 on system paths, curl|bash network-exfil, chained shutdown/reboot, git destructive ops (push --force, reset --hard), and DROP DATABASE / TRUNCATE via CLI. Returns verdict (CLEAN / CAUTION / REVIEW / BLOCK / UNVERIFIED), risk_score (0-100), and per-finding rule_id + severity + recommendation. Sub-second, local, no API key. Use inline before approving any agent-proposed command.
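The verdicts above lend themselves to a simple caller-side execution gate. Below is a minimal sketch of one possible policy, assuming the report arrives as a plain dict with the documented `verdict` and `risk_score` fields; the sample reports are hypothetical illustrations of that shape, not captured tool output:

```python
# Caller-side gate over the documented report fields. The report dicts
# below are hypothetical examples of the documented shape, not real output.

def should_execute(report: dict) -> bool:
    """Auto-approve only CLEAN reports; route everything else to a human."""
    return report.get("verdict") == "CLEAN" and report.get("risk_score", 100) == 0

clean = {"verdict": "CLEAN", "risk_score": 0, "finding_count": 0}
blocked = {"verdict": "BLOCK", "risk_score": 40, "finding_count": 1}

print(should_execute(clean))    # True
print(should_execute(blocked))  # False
```

One deliberate choice in this sketch: `risk_score` defaults to 100 when absent, so a malformed report fails closed rather than open.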
## Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| command | Yes | The shell command to vet (single command or pipeline) | — |
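For reference, a tool invocation carries exactly one argument matching this schema. A minimal sketch of building and checking such a payload (the example command string is hypothetical):

```python
import json

# Hypothetical tools/call arguments for vet_command; "command" is the
# only property in the schema above, and it is required.
arguments = {"command": "cd /tmp && rm -rf ./build"}
payload = json.dumps({"name": "vet_command", "arguments": arguments})

decoded = json.loads(payload)
print("command" in decoded["arguments"])  # True
```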
## Implementation Reference
- src/bash_vet_mcp/server.py:60-124 (registration): The `vet_command` tool is registered in the `list_tools` handler with its name, description, and inputSchema (accepts a `command` string).

```python
return [
    Tool(
        name="vet_command",
        description=(
            "Vet a single shell command for destructive patterns BEFORE execution. "
            "Detects rm -rf nested in chains, package-manager glob removal "
            "(apt remove '*nvidia*'), dd/mkfs/wipefs filesystem destruction, "
            "chmod 777 on system paths, curl|bash network-exfil, chained "
            "shutdown/reboot, git destructive ops (push --force, reset --hard), "
            "and DROP DATABASE / TRUNCATE via cli. Returns verdict (CLEAN / "
            "CAUTION / REVIEW / BLOCK / UNVERIFIED), risk_score (0-100), and "
            "per-finding rule_id + severity + recommendation. Sub-second, local, "
            "no API key. Use inline before approving any agent-proposed command."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "command": {
                    "type": "string",
                    "description": "The shell command to vet (single command or pipeline)",
                },
            },
            "required": ["command"],
        },
    ),
    Tool(
        name="vet_command_chain",
        description=(
            "Vet a chained / multi-statement shell command — same rules as "
            "`vet_command`, but escalates LOW→MEDIUM and MEDIUM→HIGH because "
            "destructive fragments nested deep inside a chain (after `&&`, `;`, "
            "or `|`) are easier for the operator to overlook on a quick read. "
            "Use this for any command containing &&, ||, ;, or piped subshells. "
            "The exact failure mode this targets: r/LocalLLaMA 'one bash "
            "permission slipped' (1.5k upvotes) — agent proposed a chained "
            "command, operator pattern-matched the lede, missed `rm -rf` deep "
            "in the chain."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "command": {
                    "type": "string",
                    "description": "The chained shell command to vet",
                },
            },
            "required": ["command"],
        },
    ),
    Tool(
        name="list_detection_rules",
        description=(
            "Return the catalog of every detection rule the scanner applies — "
            "rule_id, severity, pattern_kind, description, example_match. "
            "Use this to audit coverage, document detection scope to your "
            "compliance/security team, or build a custom allowlist. 30 rules "
            "across 8 families: DESTRUCTIVE / PACKAGE / PRIVILEGED / SHUTDOWN "
            "/ EXFIL / DATABASE / GIT / SUSPICIOUS."
        ),
        inputSchema={
            "type": "object",
            "properties": {},
        },
    ),
]
```

- src/bash_vet_mcp/scanner.py:420-491 (handler): The `vet_command` function is the core implementation — it parses the command, runs regex-based detection rules, optionally escalates severity for chained commands, computes the risk score and verdict, and returns a `CommandVetReport`.
```python
def vet_command(command: str, *, command_chain: bool = False) -> CommandVetReport:
    """Scan a shell command for destructive patterns. Returns a CommandVetReport.

    `command_chain=True` raises severity by one level for chained commands
    (because nested destructive fragments are easier to overlook on quick read).
    """
    if not command.strip():
        return CommandVetReport(
            verdict=Verdict.UNVERIFIED,
            risk_score=0,
            finding_count=0,
            findings=[],
            summary="No command provided.",
            parse_error=None,
        )

    parsed_ok, parse_error = _try_bashlex_parse(command)
    findings = _scan_with_regex(command)

    # If chain mode + we have findings, escalate any LOW/MEDIUM by one tier (because
    # nested patterns in chained commands are easier to overlook on quick read).
    if command_chain and findings:
        escalated: list[CommandFinding] = []
        for f in findings:
            if f.severity == Severity.LOW:
                escalated.append(f.model_copy(update={"severity": Severity.MEDIUM}))
            elif f.severity == Severity.MEDIUM:
                escalated.append(f.model_copy(update={"severity": Severity.HIGH}))
            else:
                escalated.append(f)
        findings = escalated

    # Sort by severity desc, then position asc
    severity_rank = {Severity.CRITICAL: 4, Severity.HIGH: 3, Severity.MEDIUM: 2,
                     Severity.LOW: 1, Severity.INFO: 0}
    findings.sort(key=lambda f: (-severity_rank[f.severity], f.position or 0))

    score = _risk_score(findings)
    verdict = _verdict_from_findings(findings)

    if not parsed_ok and not findings:
        # Can't parse + nothing matched regex — be honest
        return CommandVetReport(
            verdict=Verdict.UNVERIFIED,
            risk_score=0,
            finding_count=0,
            findings=[],
            summary="Could not parse the input as bash; no regex rules matched either. Inspect manually.",
            parse_error=parse_error,
        )

    if not findings:
        summary = "No destructive patterns detected. Command appears safe to execute."
    elif verdict == Verdict.BLOCK:
        worst = findings[0]
        summary = (
            f"BLOCK — {len(findings)} finding(s); worst is {worst.severity.upper()} "
            f"({worst.rule_id}): {worst.description}"
        )
    elif verdict == Verdict.REVIEW:
        summary = f"REVIEW — {len(findings)} medium-severity finding(s). Sandbox-test or pair-review before running."
    else:  # CAUTION
        summary = f"CAUTION — {len(findings)} low-severity finding(s). Likely safe but document if intentional."

    return CommandVetReport(
        verdict=verdict,
        risk_score=score,
        finding_count=len(findings),
        findings=findings,
        summary=summary,
        parse_error=parse_error if not parsed_ok else None,
    )
```

- src/bash_vet_mcp/types.py:58-70 (schema): `CommandVetReport` is the return type for `vet_command` — it contains verdict, risk_score, finding_count, findings, summary, and parse_error.
```python
class CommandVetReport(BaseModel):
    """Response for `vet_command` and `vet_command_chain`."""

    model_config = ConfigDict(frozen=True)

    verdict: Verdict
    risk_score: int
    """0–100. Severity-weighted: CRITICAL=40, HIGH=15, MEDIUM=5, LOW=1, INFO=0; capped at 100."""
    finding_count: int
    findings: list[CommandFinding]
    summary: str
    parse_error: str | None = None
    """Set if the input wasn't parseable as bash. Verdict will be UNVERIFIED."""
```

- src/bash_vet_mcp/scanner.py:392-404 (helper): `_scan_with_regex` iterates over all detection rules and produces findings — the core detection logic called by `vet_command`.
```python
def _scan_with_regex(command: str) -> list[CommandFinding]:
    findings: list[CommandFinding] = []
    seen: set[tuple[str, str]] = set()
    for rule in _RULES:
        regex = re.compile(rule[3], re.IGNORECASE)
        for m in regex.finditer(command):
            snippet = command[max(0, m.start() - 5) : min(len(command), m.end() + 30)].strip()
            key = (rule[0], snippet)
            if key in seen:
                continue
            seen.add(key)
            findings.append(_make_finding(rule, snippet, m.start()))
    return findings
```

- src/bash_vet_mcp/scanner.py:53-68 (helper): `_risk_score` and `_verdict_from_findings` are helper functions used by `vet_command` to compute the risk score (0-100) and determine the final verdict (CLEAN/CAUTION/REVIEW/BLOCK/UNVERIFIED).
```python
def _risk_score(findings: list[CommandFinding]) -> int:
    score = sum(_SEVERITY_WEIGHT[f.severity] for f in findings)
    return min(score, 100)


def _verdict_from_findings(findings: list[CommandFinding]) -> Verdict:
    if not findings:
        return Verdict.CLEAN
    severities = {f.severity for f in findings}
    if Severity.CRITICAL in severities or Severity.HIGH in severities:
        return Verdict.BLOCK
    if Severity.MEDIUM in severities:
        return Verdict.REVIEW
    if Severity.LOW in severities:
        return Verdict.CAUTION
    return Verdict.CLEAN
```
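The scoring and verdict rules above are easy to check by hand. Here is a standalone sketch that reimplements them with plain strings instead of the package's `Severity`/`Verdict` enums, using the documented weights (CRITICAL=40, HIGH=15, MEDIUM=5, LOW=1, INFO=0, capped at 100):

```python
# Standalone sketch of the documented scoring and verdict rules; the
# real implementation uses enums and Pydantic models, not bare strings.

WEIGHT = {"CRITICAL": 40, "HIGH": 15, "MEDIUM": 5, "LOW": 1, "INFO": 0}

def risk_score(severities: list[str]) -> int:
    # Severity-weighted sum, capped at 100.
    return min(sum(WEIGHT[s] for s in severities), 100)

def verdict(severities: list[str]) -> str:
    # Worst-severity ladder: any CRITICAL/HIGH blocks outright.
    if not severities:
        return "CLEAN"
    present = set(severities)
    if present & {"CRITICAL", "HIGH"}:
        return "BLOCK"
    if "MEDIUM" in present:
        return "REVIEW"
    if "LOW" in present:
        return "CAUTION"
    return "CLEAN"

print(risk_score(["CRITICAL"] * 3))  # 100 (capped from 120)
print(verdict(["MEDIUM", "LOW"]))    # REVIEW
```

Note that the verdict ignores the numeric score entirely: one HIGH finding (score 15) blocks, while fifteen LOW findings (score 15) only warrant CAUTION.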