# vet_skill
Runs manifest, static, AST, and dependency scanners on a skill, returning a risk score and per-finding evidence. Use it before installing a skill to detect prompt injection, exfiltration, or other security issues.
## Instructions
Run all scanners on a single skill — manifest, static patterns, AST, dependencies. Returns a VetReport with risk_score (0-100), risk_level (BLOCK/REVIEW/CAUTION/CLEAN), per-finding details, and a one-paragraph summary. Use this before installing a skill.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| skill_id | Yes | Skill ID to vet. Must exist in the configured backend (default: `~/.openclaw/skills/<skill_id>/`). | (none) |
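For illustration, a request to this tool carries just the one property. The skill ID below is hypothetical, not one that ships with the project:

```python
# Hypothetical arguments payload for the vet_skill tool; "pdf-tools" is
# an illustrative skill ID only.
arguments = {"skill_id": "pdf-tools"}

# The dispatch handler normalizes the value the same way before lookup:
skill_id = str(arguments.get("skill_id", "")).strip()
assert skill_id == "pdf-tools"
```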
## Implementation Reference
- The core handler function that runs all four scanners (manifest, static, ast_check, dependencies) on a Skill, computes risk score and level, and returns a VetReport.
```python
def vet_skill(skill: Skill) -> VetReport:
    """Run every scanner on a skill and produce the consolidated report."""
    all_findings: list[Finding] = []
    scanners_run: list[str] = []
    for scanner_module in (manifest, static, ast_check, dependencies):
        sc_findings = scanner_module.scan(skill)
        all_findings.extend(sc_findings)
        scanners_run.append(scanner_module.SCANNER_NAME)
    sorted_findings = _sort_findings(all_findings)
    score = _compute_risk_score(sorted_findings)
    risk_level = _bucket_risk(score, sorted_findings)
    return VetReport(
        skill_id=skill.skill_id,
        skill_path=skill.root_path,
        captured_at=datetime.now(UTC),
        risk_score=score,
        risk_level=risk_level,
        findings=sorted_findings,
        summary=_build_summary(skill, sorted_findings, risk_level),
        scanners_run=scanners_run,
    )
```

- src/openclaw_skill_vetter_mcp/server.py:45-66 (registration): Registers the 'vet_skill' tool with MCP, defining its name, description, and inputSchema (requires skill_id).
```python
Tool(
    name="vet_skill",
    description=(
        "Run all scanners on a single skill — manifest, static patterns, AST, "
        "dependencies. Returns a VetReport with risk_score (0-100), risk_level "
        "(BLOCK/REVIEW/CAUTION/CLEAN), per-finding details, and a one-paragraph summary. "
        "Use this before installing a skill."
    ),
    inputSchema={
        "type": "object",
        "properties": {
            "skill_id": {
                "type": "string",
                "description": (
                    "Skill ID to vet. Must exist in the configured backend "
                    "(default: ~/.openclaw/skills/<skill_id>/)."
                ),
            },
        },
        "required": ["skill_id"],
    },
),
```

- The call_tool handler that dispatches 'vet_skill' requests: extracts skill_id, fetches the skill via the backend, calls vet_skill(), and serializes the VetReport.
```python
if name == "vet_skill":
    skill_id = str(arguments.get("skill_id", "")).strip()
    if not skill_id:
        return [TextContent(type="text", text=json.dumps({"error": "skill_id is required"}))]
    skill = await backend.get_skill_by_id(skill_id)
    if skill is None:
        return [TextContent(type="text", text=json.dumps({"error": f"Skill {skill_id!r} not found"}))]
    return _serialize(vet_skill(skill))
```

- Helper used by vet_skill to compose the one-paragraph summary string for the VetReport.
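`_serialize` is referenced by the handler but not shown. A plausible sketch, assuming VetReport is a dataclass whose datetime and enum fields need explicit JSON handling; the function name and logic here are assumptions, and the real helper presumably wraps the resulting string in a one-element TextContent list to match the error paths above:

```python
import json
from dataclasses import asdict, is_dataclass
from datetime import datetime
from enum import Enum

def _json_default(obj):
    # json can't encode datetimes or enums natively; map them to strings.
    if isinstance(obj, datetime):
        return obj.isoformat()
    if isinstance(obj, Enum):
        return obj.value
    raise TypeError(f"not JSON serializable: {type(obj).__name__}")

def serialize_report(report) -> str:
    # Hypothetical sketch: dump a VetReport-like dataclass to a JSON string.
    payload = asdict(report) if is_dataclass(report) else vars(report)
    return json.dumps(payload, default=_json_default)
```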
```python
def _build_summary(skill: Skill, findings: list[Finding], risk_level: RiskLevel) -> str:
    by_severity = Counter(f.severity for f in findings)
    if risk_level == RiskLevel.BLOCK:
        verdict = "BLOCK — do not install"
    elif risk_level == RiskLevel.REVIEW:
        verdict = "REVIEW — operator review required before install"
    elif risk_level == RiskLevel.CAUTION:
        verdict = "CAUTION — proceed with awareness of flagged items"
    elif risk_level == RiskLevel.UNKNOWN:
        verdict = "UNKNOWN — could not parse skill"
    else:
        verdict = "CLEAN — no security findings"
    counts: list[str] = []
    for sev in (Severity.CRITICAL, Severity.HIGH, Severity.MEDIUM, Severity.LOW, Severity.INFO):
        n = by_severity[sev]
        if n > 0:
            counts.append(f"{n} {sev.value}")
    finding_str = (", ".join(counts)) if counts else "no findings"
    return f"Skill {skill.skill_id!r}: {verdict}. Findings: {finding_str}."
```

- Helper used by vet_skill to bucket the risk level based on score and finding severities.
```python
def _bucket_risk(score: int, findings: list[Finding]) -> RiskLevel:
    by_severity = Counter(f.severity for f in findings)
    if by_severity[Severity.CRITICAL] > 0 or score >= 80:
        return RiskLevel.BLOCK
    if by_severity[Severity.HIGH] > 0 or score >= 50:
        return RiskLevel.REVIEW
    if by_severity[Severity.MEDIUM] > 0 or score >= 20:
        return RiskLevel.CAUTION
    return RiskLevel.CLEAN
```
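The severity overrides and score thresholds compose so that a single severe finding escalates the level regardless of the numeric score. A self-contained restatement of that logic, using minimal stand-ins for the project's `Severity`, `RiskLevel`, and `Finding` types:

```python
from collections import Counter
from enum import Enum
from typing import NamedTuple

class Severity(Enum):  # minimal stand-in for the real enum
    CRITICAL = "critical"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"
    INFO = "info"

class RiskLevel(Enum):  # minimal stand-in for the real enum
    BLOCK = "block"
    REVIEW = "review"
    CAUTION = "caution"
    CLEAN = "clean"

class Finding(NamedTuple):  # minimal stand-in for the real Finding model
    severity: Severity

def bucket_risk(score: int, findings: list) -> RiskLevel:
    # Same logic as _bucket_risk above: a single CRITICAL or HIGH finding
    # forces at least BLOCK / REVIEW regardless of the numeric score.
    by_severity = Counter(f.severity for f in findings)
    if by_severity[Severity.CRITICAL] > 0 or score >= 80:
        return RiskLevel.BLOCK
    if by_severity[Severity.HIGH] > 0 or score >= 50:
        return RiskLevel.REVIEW
    if by_severity[Severity.MEDIUM] > 0 or score >= 20:
        return RiskLevel.CAUTION
    return RiskLevel.CLEAN

# A lone CRITICAL finding blocks even with a low aggregate score.
assert bucket_risk(10, [Finding(Severity.CRITICAL)]) is RiskLevel.BLOCK
# A high score alone triggers REVIEW without any HIGH findings.
assert bucket_risk(55, [Finding(Severity.LOW)]) is RiskLevel.REVIEW
assert bucket_risk(0, []) is RiskLevel.CLEAN
```

Note that `_bucket_risk` itself never returns UNKNOWN; that level, handled in `_build_summary`, is assigned elsewhere when a skill cannot be parsed.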