openclaw-skill-vetter-mcp

by temurkhan13
vet_skill

Runs manifest, static, AST, and dependency scanners on a skill, returning a risk score and per-finding evidence. Use before installing a skill to detect prompt injection, exfiltration, or other security issues.

Instructions

Run all scanners on a single skill — manifest, static patterns, AST, dependencies. Returns a VetReport with risk_score (0-100), risk_level (BLOCK/REVIEW/CAUTION/CLEAN), per-finding details, and a one-paragraph summary. Use this before installing a skill.

Input Schema

Name      Required  Description
skill_id  Yes       Skill ID to vet. Must exist in the configured backend (default: ~/.openclaw/skills/<skill_id>/).

Implementation Reference

  • The core handler function that runs all four scanners (manifest, static, ast_check, dependencies) on a Skill, computes risk score and level, and returns a VetReport.
    def vet_skill(skill: Skill) -> VetReport:
        """Run every scanner on a skill and produce the consolidated report."""
        all_findings: list[Finding] = []
        scanners_run: list[str] = []
    
        for scanner_module in (manifest, static, ast_check, dependencies):
            sc_findings = scanner_module.scan(skill)
            all_findings.extend(sc_findings)
            scanners_run.append(scanner_module.SCANNER_NAME)
    
        sorted_findings = _sort_findings(all_findings)
        score = _compute_risk_score(sorted_findings)
        risk_level = _bucket_risk(score, sorted_findings)
    
        return VetReport(
            skill_id=skill.skill_id,
            skill_path=skill.root_path,
            captured_at=datetime.now(UTC),
            risk_score=score,
            risk_level=risk_level,
            findings=sorted_findings,
            summary=_build_summary(skill, sorted_findings, risk_level),
            scanners_run=scanners_run,
        )
  • Registers the 'vet_skill' tool with MCP, defining its name, description, and inputSchema (requires skill_id).
    Tool(
        name="vet_skill",
        description=(
            "Run all scanners on a single skill — manifest, static patterns, AST, "
            "dependencies. Returns a VetReport with risk_score (0-100), risk_level "
            "(BLOCK/REVIEW/CAUTION/CLEAN), per-finding details, and a one-paragraph summary. "
            "Use this before installing a skill."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "skill_id": {
                    "type": "string",
                    "description": (
                        "Skill ID to vet. Must exist in the configured backend "
                        "(default: ~/.openclaw/skills/<skill_id>/)."
                    ),
                },
            },
            "required": ["skill_id"],
        },
    ),
  • The call_tool handler that dispatches 'vet_skill' requests: extracts skill_id, fetches the skill via backend, calls vet_skill(), and serializes the VetReport.
    if name == "vet_skill":
        skill_id = str(arguments.get("skill_id", "")).strip()
        if not skill_id:
            return [TextContent(type="text", text=json.dumps({"error": "skill_id is required"}))]
        skill = await backend.get_skill_by_id(skill_id)
        if skill is None:
            return [TextContent(type="text", text=json.dumps({"error": f"Skill {skill_id!r} not found"}))]
        return _serialize(vet_skill(skill))
  • Helper used by vet_skill to compose the one-paragraph summary string for the VetReport.
    def _build_summary(skill: Skill, findings: list[Finding], risk_level: RiskLevel) -> str:
        by_severity = Counter(f.severity for f in findings)
        if risk_level == RiskLevel.BLOCK:
            verdict = "BLOCK — do not install"
        elif risk_level == RiskLevel.REVIEW:
            verdict = "REVIEW — operator review required before install"
        elif risk_level == RiskLevel.CAUTION:
            verdict = "CAUTION — proceed with awareness of flagged items"
        elif risk_level == RiskLevel.UNKNOWN:
            verdict = "UNKNOWN — could not parse skill"
        else:
            verdict = "CLEAN — no security findings"
    
        counts: list[str] = []
        for sev in (Severity.CRITICAL, Severity.HIGH, Severity.MEDIUM, Severity.LOW, Severity.INFO):
            n = by_severity[sev]
            if n > 0:
                counts.append(f"{n} {sev.value}")
    
        finding_str = (", ".join(counts)) if counts else "no findings"
        return f"Skill {skill.skill_id!r}: {verdict}. Findings: {finding_str}."
  • Helper used by vet_skill to bucket risk level based on score and findings severity.
    def _bucket_risk(score: int, findings: list[Finding]) -> RiskLevel:
        by_severity = Counter(f.severity for f in findings)
        if by_severity[Severity.CRITICAL] > 0 or score >= 80:
            return RiskLevel.BLOCK
        if by_severity[Severity.HIGH] > 0 or score >= 50:
            return RiskLevel.REVIEW
        if by_severity[Severity.MEDIUM] > 0 or score >= 20:
            return RiskLevel.CAUTION
        return RiskLevel.CLEAN
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The tool declares no behavioral annotations, so the description must cover behavior on its own. It states that it runs scanners and returns a report, implying a non-destructive analysis, but it does not explicitly confirm the absence of side effects, authentication requirements, or performance impact.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: the first describes the action and output, the second gives usage guidance. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given one simple parameter and no output schema, the description adequately covers what the tool does, what it returns, and when to use it. Complete for its complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There is only one parameter, 'skill_id', and it has a full schema description. The tool description adds no meaning beyond the schema; it merely rephrases the requirement that the skill must exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it runs all scanners on a single skill, listing specific scanner types (manifest, static patterns, AST, dependencies) and output details. Distinguishes from siblings like 'vet_skill_directory' and 'flagged_skills_report'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises "Use this before installing a skill," providing clear usage context. It does not explicitly rule out alternatives, but the recommendation is strong.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
