
vet_command_chain

Scans chained shell commands for destructive fragments (e.g., rm -rf) hidden after &&, ||, or ;, escalating severity because operators may miss them on quick review.

Instructions

Vet a chained / multi-statement shell command — same rules as vet_command, but escalates LOW→MEDIUM and MEDIUM→HIGH because destructive fragments nested deep inside a chain (after &&, ;, or |) are easier for the operator to overlook on a quick read. Use this for any command containing &&, ||, ;, or piped subshells. The exact failure mode this targets: r/LocalLLaMA 'one bash permission slipped' (1.5k upvotes) — agent proposed a chained command, operator pattern-matched the lede, missed rm -rf deep in the chain.
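The failure mode described above is easy to reproduce: a destructive fragment sits several operators deep in an otherwise innocuous chain, past where a quick read stops. A minimal sketch of why chain position matters — plain `re.split` over the chain operators, not the server's actual rule set, and the example command is hypothetical:

```python
import re

# A chain whose lede looks harmless; the destructive fragment sits three
# operators deep -- exactly the pattern this tool escalates.
chain = "git pull && npm install && npm run build; rm -rf ./dist-old"

# Split on the chain operators (&&, ||, ;) and find which fragment
# contains the destructive pattern.
fragments = re.split(r"&&|\|\||;", chain)
destructive = [i for i, frag in enumerate(fragments) if "rm -rf" in frag]

print(destructive)  # [3] -- the rm -rf is the fourth fragment, not the lede
```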

Input Schema

| Name    | Required | Description                      | Default |
| ------- | -------- | -------------------------------- | ------- |
| command | Yes      | The chained shell command to vet | —       |

Implementation Reference

  • Tool registration in list_tools() — defines the 'vet_command_chain' tool with its name, description, and inputSchema (command string required).
    Tool(
        name="vet_command_chain",
        description=(
            "Vet a chained / multi-statement shell command — same rules as "
            "`vet_command`, but escalates LOW→MEDIUM and MEDIUM→HIGH because "
            "destructive fragments nested deep inside a chain (after `&&`, `;`, "
            "or `|`) are easier for the operator to overlook on a quick read. "
            "Use this for any command containing &&, ||, ;, or piped subshells. "
            "The exact failure mode this targets: r/LocalLLaMA 'one bash "
            "permission slipped' (1.5k upvotes) — agent proposed a chained "
            "command, operator pattern-matched the lede, missed `rm -rf` deep "
            "in the chain."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "command": {
                    "type": "string",
                    "description": "The chained shell command to vet",
                },
            },
            "required": ["command"],
        },
    ),
  • Tool dispatch handler in call_tool() — routes 'vet_command_chain' to vet_command(command, command_chain=True).
    if name == "vet_command_chain":
        command = str(arguments.get("command", ""))
        return _serialize(vet_command(command, command_chain=True))
  • Core scanner function vet_command() with command_chain parameter — when True, escalates LOW→MEDIUM and MEDIUM→HIGH severity findings.
    def vet_command(command: str, *, command_chain: bool = False) -> CommandVetReport:
        """Scan a shell command for destructive patterns. Returns a CommandVetReport.
    
        `command_chain=True` raises severity by one level for chained commands
        (because nested destructive fragments are easier to overlook on quick read).
        """
        if not command.strip():
            return CommandVetReport(
                verdict=Verdict.UNVERIFIED,
                risk_score=0,
                finding_count=0,
                findings=[],
                summary="No command provided.",
                parse_error=None,
            )
    
        parsed_ok, parse_error = _try_bashlex_parse(command)
    
        findings = _scan_with_regex(command)
    
        # If chain mode + we have findings, escalate any LOW/MEDIUM by one tier (because
        # nested patterns in chained commands are easier to overlook on quick read).
        if command_chain and findings:
            escalated: list[CommandFinding] = []
            for f in findings:
                if f.severity == Severity.LOW:
                    escalated.append(f.model_copy(update={"severity": Severity.MEDIUM}))
                elif f.severity == Severity.MEDIUM:
                    escalated.append(f.model_copy(update={"severity": Severity.HIGH}))
                else:
                    escalated.append(f)
            findings = escalated
    
        # Sort by severity desc, then position asc
        severity_rank = {Severity.CRITICAL: 4, Severity.HIGH: 3, Severity.MEDIUM: 2, Severity.LOW: 1, Severity.INFO: 0}
        findings.sort(key=lambda f: (-severity_rank[f.severity], f.position or 0))
    
        score = _risk_score(findings)
        verdict = _verdict_from_findings(findings)
    
        if not parsed_ok and not findings:
            # Can't parse + nothing matched regex — be honest
            return CommandVetReport(
                verdict=Verdict.UNVERIFIED,
                risk_score=0,
                finding_count=0,
                findings=[],
                summary="Could not parse the input as bash; no regex rules matched either. Inspect manually.",
                parse_error=parse_error,
            )
    
        if not findings:
            summary = "No destructive patterns detected. Command appears safe to execute."
        elif verdict == Verdict.BLOCK:
            worst = findings[0]
            summary = (
                f"BLOCK — {len(findings)} finding(s); worst is {worst.severity.upper()} "
                f"({worst.rule_id}): {worst.description}"
            )
        elif verdict == Verdict.REVIEW:
            summary = f"REVIEW — {len(findings)} medium-severity finding(s). Sandbox-test or pair-review before running."
        else:  # CAUTION
            summary = f"CAUTION — {len(findings)} low-severity finding(s). Likely safe but document if intentional."
    
        return CommandVetReport(
            verdict=verdict,
            risk_score=score,
            finding_count=len(findings),
            findings=findings,
            summary=summary,
            parse_error=parse_error if not parsed_ok else None,
        )
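The chain-mode escalation step can be exercised in isolation. A self-contained sketch, with plain stdlib dataclasses standing in for the server's pydantic `CommandFinding` and `Severity` types (the field names here are assumptions for illustration):

```python
from dataclasses import dataclass, replace
from enum import Enum

class Severity(str, Enum):
    INFO = "info"
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass(frozen=True)
class Finding:
    rule_id: str
    severity: Severity

# Only LOW and MEDIUM are bumped; INFO, HIGH, and CRITICAL pass through.
_BUMP = {Severity.LOW: Severity.MEDIUM, Severity.MEDIUM: Severity.HIGH}

def escalate_for_chain(findings):
    """Raise each finding's severity by one tier for chained commands."""
    return [replace(f, severity=_BUMP.get(f.severity, f.severity))
            for f in findings]

findings = [
    Finding("RM_RF", Severity.MEDIUM),
    Finding("SUDO", Severity.LOW),
    Finding("FORKBOMB", Severity.CRITICAL),
]
print([f.severity.value for f in escalate_for_chain(findings)])
# ['high', 'medium', 'critical']
```

`dataclasses.replace` mirrors the `model_copy(update=...)` call in the real implementation: both produce a new frozen instance rather than mutating in place.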
  • CommandVetReport model — response schema shared by vet_command and vet_command_chain.
    class CommandVetReport(BaseModel):
        """Response for `vet_command` and `vet_command_chain`."""
    
        model_config = ConfigDict(frozen=True)
    
        verdict: Verdict
        risk_score: int
        """0–100. Severity-weighted: CRITICAL=40, HIGH=15, MEDIUM=5, LOW=1, INFO=0; capped at 100."""
        finding_count: int
        findings: list[CommandFinding]
        summary: str
        parse_error: str | None = None
        """Set if the input wasn't parseable as bash. Verdict will be UNVERIFIED."""
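The `risk_score` weighting can be sketched directly from the docstring above (CRITICAL=40, HIGH=15, MEDIUM=5, LOW=1, INFO=0, capped at 100). The server's actual `_risk_score` is not shown here, so this is an assumption of how it behaves:

```python
WEIGHTS = {"critical": 40, "high": 15, "medium": 5, "low": 1, "info": 0}

def risk_score(severities):
    """Severity-weighted sum, capped at 100, per the docstring."""
    return min(100, sum(WEIGHTS[s] for s in severities))

print(risk_score(["critical", "high", "medium"]))  # 40 + 15 + 5 = 60
print(risk_score(["critical"] * 3))                # 120 capped to 100
```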
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It explains severity escalation (LOW→MEDIUM, MEDIUM→HIGH) and cites a real incident, but it does not specify what the vetting result is (e.g., risk score, flag, block) or how to interpret the output. This leaves some ambiguity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description runs about four sentences and is relatively concise. The r/LocalLLaMA anecdote adds context but could be trimmed. It front-loads the core purpose and is only slightly verbose — acceptable overall.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given one required parameter, no output schema, and no annotations, the description adequately covers usage context and behavior difference from sibling. However, it lacks details on response format, error cases, or what success/failure looks like. For a simple tool, this might be sufficient, but more completeness would help.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, baseline is 3. The description adds nuance by clarifying what 'chained' means (&&, ||, ;, piped subshells), but this largely echoes the schema's description of 'The chained shell command to vet' without adding significant new meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool vets chained shell commands and escalates severity compared to vet_command. It provides specific examples of chain operators (&&, ||, ;, piped subshells) and references a real-world failure mode, effectively distinguishing it from its sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Use this for any command containing &&, ||, ;, or piped subshells.' The description also explains the rationale (operator oversight). It does not explicitly state when not to use, but the sibling name implies single commands should go to vet_command.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

