openclaw-skill-vetter-mcp
Vet ClawHub skills before installing them. Detects prompt-injection patterns, hardcoded exfiltration channels, dangerous dynamic execution, manifest/permission drift, and known typosquat dependencies. Outputs a 0-100 risk score + per-finding evidence the operator can paste into a ticket. Companion to silentwatch-mcp, openclaw-health-mcp, and openclaw-cost-tracker-mcp.
What it does
ClawHub skills are third-party code. Once installed, they run inside the operator's OpenClaw environment with whatever permissions they were granted. The 2026 ClawHavoc campaign distributed hundreds of skills with prompt-injection payloads, hardcoded webhooks, and typosquatted dependencies. Per public post-mortem analysis: 36% of ClawHub skills carried injection patterns, and 8% were actively exfiltrating data.
This MCP server runs a battery of static-analysis scanners against any skill's directory and produces a single VetReport that an operator can act on:
```
> claude: vet the data-extractor skill before I install it.

[MCP tool: vet_skill]

Skill 'data-extractor': BLOCK — do not install.
Risk score: 100/100. Findings: 1 critical, 4 high, 1 info.

Critical:
  EXFIL.WEBHOOK_DISCORD (extract.py:5) —
    Hardcoded Discord webhook URL: 'https://discord.com/api/webhooks/...'
    Recommendation: Refuse install unless explicitly justified.
High:
  AST.OS_SYSTEM (extract.py:14) — os.system('curl ... | bash')
  EXFIL.ENV_DUMP (extract.py:9) — dumps full os.environ
  MANIFEST.WILDCARD_PERMISSION — network.http: *
  ...

Vet result for data-extractor: REFUSE INSTALL.

> claude: any flagged skills currently installed?

[MCP tool: flagged_skills_report]

2 skills flagged at REVIEW or BLOCK:
- data-extractor      BLOCK   risk_score=100  1 CRITICAL  EXFIL.WEBHOOK_DISCORD
- markdown-formatter  REVIEW  risk_score=35   1 HIGH      AST.EVAL_CALL on user input
```

Why openclaw-skill-vetter-mcp
Three things existing tools (manual code review, generic SAST, ClawHub trust scores) don't do:
1. **Skill-aware scanning.** Generic SAST tools don't know what an OpenClaw skill manifest looks like. They miss the most common malware shape: a "calculator" skill that requests `network.http: *`. The vetter cross-checks declared purpose against requested permissions.
2. **Risk score the operator can paste into a ticket.** Not "high cyclomatic complexity" — `BLOCK — Discord webhook at extract.py:5`. Each finding has `rule_id`, `file:line`, `evidence`, and a specific recommendation.
3. **Built for review-before-install, not after-the-fact audit.** Run it from inside Claude on a skill you're about to add and get a verdict in seconds. Refuse the install if it's BLOCK; sandbox-test if REVIEW; install if CLEAN.
Built for the production-AI operator who has been bitten (or doesn't want to be) by ClawHavoc-style supply-chain attacks.
Tool surface
| Tool | What it returns |
| --- | --- |
| `vet_skill` | Full VetReport for one skill: risk_score, risk_level, sorted findings, summary |
|  | Aggregate report across every skill in the directory + per-bucket counts |
|  | Lightweight: just bucket counts + flagged skill IDs |
| `flagged_skills_report` | Just REVIEW + BLOCK skills with their findings |
|  | Focused: only prompt-injection findings on one skill |
|  | Focused: only exfiltration findings on one skill |
| `list_detection_rules` | Catalog of every rule the server applies (transparency) |
Resources:
- `skill-vetter://overview` — installed-skills risk overview
- `skill-vetter://flagged` — currently-flagged skills
- `skill-vetter://rules` — detection rules catalog
Prompts:
- `pre-install-skill-check` — vet a specific skill before installation
- `weekly-skill-audit` — compose a 200-word weekly audit of all installed skills
Quickstart
Install
```
pip install openclaw-skill-vetter-mcp
```

Configure for Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
```json
{
  "mcpServers": {
    "openclaw-skill-vetter": {
      "command": "python",
      "args": ["-m", "openclaw_skill_vetter_mcp"],
      "env": {
        "OPENCLAW_SKILL_VETTER_BACKEND": "mock"
      }
    }
  }
}
```

Backends
| Backend | Status | Description |
| --- | --- | --- |
| `mock` | ✅ v1.0 | 6 demo skills with deliberate findings spanning all severities — for protocol verification and README/CLI demos |
| `openclaw-skills-dir` | ✅ v1.0 | Reads skills from a local OpenClaw skills directory |
|  | ⏳ v1.1 | Fetches a candidate skill from the ClawHub registry directly for vet-before-install workflows |
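To vet real installed skills rather than the demo data, switch the backend in the Claude Desktop config above. The env var and the backend name are the ones documented here; any additional settings the directory backend needs (such as the path to your skills directory) are not shown, so check the package docs:

```json
"env": {
  "OPENCLAW_SKILL_VETTER_BACKEND": "openclaw-skills-dir"
}
```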
Skill manifest format
Each skill directory contains a skill.yaml (or skill.json):
```yaml
id: weather-fetch
name: Weather Fetch
version: 1.0.0
author: verified-publisher@openclaw.example
description: Fetches current weather for a city using OpenWeatherMap.
purpose: Live weather data lookup
runtime: python3.11
entry_point: main.py
permissions:
  - network.http: api.openweathermap.org
dependencies:
  - requests>=2.31
  - pydantic>=2.0
signature: ed25519:abcd1234efgh5678
```

Plus the actual code files (`*.py`, `*.js`, `*.ts`, `*.sh`, `*.rb`, `*.go`, `*.rs`) and any prompt files (`*.prompt`, `*.md`, `*.txt`).
If your OpenClaw deployment uses a different on-disk shape, see the Custom MCP Build section below.
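For orientation, here is a minimal sketch of loading that manifest into a typed object, assuming PyYAML plus pydantic (pydantic already appears in the sample's dependencies). `SkillManifest` and `load_manifest` are illustrative names, not the server's actual model:

```python
from pathlib import Path

import yaml  # PyYAML
from pydantic import BaseModel

class SkillManifest(BaseModel):
    # Fields mirror the sample skill.yaml above; hypothetical, not the server's real model.
    id: str
    name: str
    version: str
    author: str = ""
    description: str = ""
    purpose: str = ""
    runtime: str = ""
    entry_point: str = ""
    permissions: list[dict[str, str]] = []
    dependencies: list[str] = []
    signature: str = ""

def load_manifest(skill_dir: Path) -> SkillManifest:
    """Read skill.yaml (or skill.json; JSON parses as YAML) from a skill directory."""
    for candidate in ("skill.yaml", "skill.json"):
        path = skill_dir / candidate
        if path.exists():
            return SkillManifest(**yaml.safe_load(path.read_text()))
    raise FileNotFoundError(f"no skill.yaml/skill.json in {skill_dir}")  # cf. MANIFEST.MISSING
```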
Detection rules (v1.0)
Four scanner modules cover the v1.0 ruleset:
Manifest — MANIFEST.MISSING, MANIFEST.PURPOSE_NETWORK_DRIFT, MANIFEST.WILDCARD_PERMISSION, MANIFEST.BROAD_FILESYSTEM_WRITE, MANIFEST.EMPTY_DESCRIPTION, MANIFEST.NO_AUTHOR, MANIFEST.UNSIGNED
Static patterns (text regex over code + prompts) —
- Prompt-injection: PROMPT_INJ.IGNORE_PRIOR, PROMPT_INJ.ROLE_OVERRIDE, PROMPT_INJ.EXTRACT_SYSTEM, PROMPT_INJ.JAILBREAK_DAN, PROMPT_INJ.NEW_USER_MARKER
- Exfiltration: EXFIL.WEBHOOK_DISCORD, EXFIL.WEBHOOK_SLACK, EXFIL.WEBHOOK_TELEGRAM, EXFIL.PASTEBIN_LITERAL, EXFIL.SSH_KEY_READ, EXFIL.AWS_CREDS_READ, EXFIL.ENV_DUMP, EXFIL.SUBPROCESS_CURL
- Dynamic execution: DYN_EXEC.SHELL_TRUE, DYN_EXEC.OS_SYSTEM, DYN_EXEC.EVAL_LITERAL, DYN_EXEC.EXEC_LITERAL, DYN_EXEC.PICKLE_LOADS, DYN_EXEC.DYNAMIC_IMPORT
- Obfuscation: OBFUSCATION.LARGE_BASE64, OBFUSCATION.LARGE_HEX
Python AST (catches what regex misses) — AST.EVAL_CALL, AST.EXEC_CALL, AST.COMPILE_CALL, AST.OS_SYSTEM, AST.OS_POPEN, AST.OS_EXECV, AST.SUBPROCESS_RUN_SHELL_TRUE, AST.SUBPROCESS_POPEN_SHELL_TRUE, AST.DYNAMIC_IMPORT
Dependencies — DEP.TYPOSQUAT, DEP.HOMOGLYPH, DEP.UNTRUSTED_GIT_SOURCE, DEP.LOCAL_PATH
Use `list_detection_rules` to query the live catalog.
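To make the static-pattern layer concrete, here is a minimal sketch of one rule, EXFIL.WEBHOOK_DISCORD, as a regex scan. The pattern and the `Finding` shape are illustrative, not the server's actual implementation:

```python
import re
from dataclasses import dataclass

# Illustrative pattern for EXFIL.WEBHOOK_DISCORD; the shipped rule may differ.
DISCORD_WEBHOOK = re.compile(r"https://discord(?:app)?\.com/api/webhooks/\d+/\S+")

@dataclass
class Finding:
    # Hypothetical shape mirroring the rule_id / file:line / evidence fields shown earlier.
    rule_id: str
    severity: str
    file: str
    line: int
    evidence: str
    recommendation: str

def scan_text(path: str, text: str) -> list[Finding]:
    """Flag hardcoded Discord webhook URLs in a skill's code or prompt file."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        match = DISCORD_WEBHOOK.search(line)
        if match:
            findings.append(Finding(
                rule_id="EXFIL.WEBHOOK_DISCORD",
                severity="CRITICAL",
                file=path,
                line=lineno,
                evidence=match.group(0)[:80],
                recommendation="Refuse install unless explicitly justified.",
            ))
    return findings
```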
Risk scoring
Each finding contributes by severity:
| Severity | Weight |
| --- | --- |
| CRITICAL | 40 |
| HIGH | 15 |
| MEDIUM | 5 |
| LOW | 1 |
| INFO | 0 |
Final risk_score = min(sum, 100). Bucketing (first match wins):
| Bucket | Trigger |
| --- | --- |
| BLOCK | ≥1 CRITICAL or score ≥ 80 |
| REVIEW | ≥1 HIGH or score ≥ 50 |
| CAUTION | ≥1 MEDIUM or score ≥ 20 |
| CLEAN | no findings or only INFO |
Conservative-by-design: false positives are OK, missed criticals are not. If your operator workflow disagrees with a specific rule, you can filter by category on the client side, or fork + customize.
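The scoring and bucketing logic is small enough to restate as code. A sketch that reproduces the two tables above (function names are illustrative):

```python
SEVERITY_WEIGHTS = {"CRITICAL": 40, "HIGH": 15, "MEDIUM": 5, "LOW": 1, "INFO": 0}

def risk_score(severities: list[str]) -> int:
    """Sum the per-finding weights, capped at 100."""
    return min(sum(SEVERITY_WEIGHTS[s] for s in severities), 100)

def bucket(severities: list[str]) -> str:
    """First match wins, top to bottom, per the bucketing table."""
    score = risk_score(severities)
    if "CRITICAL" in severities or score >= 80:
        return "BLOCK"
    if "HIGH" in severities or score >= 50:
        return "REVIEW"
    if "MEDIUM" in severities or score >= 20:
        return "CAUTION"
    return "CLEAN"

# The data-extractor demo above: 1 CRITICAL + 4 HIGH + 1 INFO
# -> min(40 + 4*15 + 0, 100) = 100 -> BLOCK
```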
Roadmap
| Version | Scope | Status |
| --- | --- | --- |
| v1.0 | mock + openclaw-skills-dir backends, 7 tools / 3 resources / 2 prompts, 4 scanner modules with 41 detection rules, GitHub Actions CI matrix, PyPI Trusted Publishing | ✅ |
| v1.1 | ClawHub-registry backend: fetch candidate skills directly from the registry for vet-before-install | ⏳ |
| v1.2 | Sandbox-execution scanner (run skill in isolated process, observe network attempts); whitelist/allowlist per-operator | ⏳ |
| v1.x | Custom rule packs; integration with existing SAST tools; per-rule severity overrides | ⏳ |
Need this adapted to your stack?
If your AI deployment doesn't use the OpenClaw skill format — different agent harness, custom skill schema, monolithic skill files, internal-registry distribution — and you want the same vet-before-install discipline, that's a Custom MCP Build engagement.
| Tier | Scope | Investment | Timeline |
| --- | --- | --- | --- |
| Simple | Single backend adapter for your existing skill format | $8,000–$12,000 | 1–2 weeks |
| Standard | Custom backend + custom rule pack tuned to your ecosystem + CI integration | $15,000–$25,000 | 2–4 weeks |
| Complex | Multi-format ingestion + sandbox-execution + signed-publisher allowlist + rule-tuning workshop | $30,000–$45,000 | 4–8 weeks |
To engage:
1. Email temur@pixelette.tech with the subject `Custom MCP Build inquiry — skill vetting`
2. Include a 1-paragraph description of your skill ecosystem and which tier you're considering
3. You'll get a reply within 2 business days with a 30-min discovery call slot
This server is part of a production-AI infrastructure MCP suite — companion to silentwatch-mcp, openclaw-health-mcp, and openclaw-cost-tracker-mcp. Install all four for full operational visibility.
Production AI audits
If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present (ClawHavoc-style skill malware being one of the most damaging), and write the corrective-action plan:
| Tier | Scope | Investment | Timeline |
| --- | --- | --- | --- |
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2–3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3–4 weeks |
Same email channel: temur@pixelette.tech with subject AI audit inquiry.
Contributing
PRs welcome. Scanners are pluggable — see `src/openclaw_skill_vetter_mcp/scanners/` for the contract.
To add a new scanner (a skeleton sketch follows this list):
1. Create `scanners/<your_scanner>.py` exporting `SCANNER_NAME: str` and `def scan(skill: Skill) -> list[Finding]`
2. Optionally export `def all_rules() -> list[tuple[...]]` for the rules catalog
3. Register it in `analysis.vet_skill` (the orchestrator iterates over a fixed tuple of scanner modules)
4. Add tests in `tests/test_scanners.py`
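A minimal scanner skeleton following that contract. The import path, the `Finding` constructor fields, and the `all_rules` tuple shape are assumptions; mirror an existing scanner in `scanners/` for the real signatures:

```python
# scanners/example_scanner.py: skeleton only; import paths and field names are assumed.
from openclaw_skill_vetter_mcp.models import Finding, Skill

SCANNER_NAME = "example"

_RULES = [
    # (rule_id, severity, description): assumed tuple shape for the rules catalog.
    ("EXAMPLE.ALWAYS_INFO", "INFO", "Demonstration rule that fires on every skill."),
]

def all_rules() -> list[tuple]:
    return _RULES

def scan(skill: Skill) -> list[Finding]:
    # A real scanner would inspect the skill's manifest and files; this one
    # emits a single INFO finding so the plumbing can be tested end to end.
    return [
        Finding(
            rule_id="EXAMPLE.ALWAYS_INFO",
            severity="INFO",
            file="",
            line=0,
            evidence=f"scanned skill {skill.id}",
            recommendation="None; demonstration only.",
        )
    ]
```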
To add a new backend (sketch after this list):
1. Subclass `SkillBackend` in `backends/<your_backend>.py`
2. Implement `get_skills`, `get_skill_by_id`, `get_directory`
3. Register it in `backends/__init__.py`
4. Add tests in `tests/test_backend_<your_backend>.py`
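And a matching backend skeleton. Again a sketch: the three method names come from the list above, while the import paths and `Skill` attributes are assumptions to verify against the repo:

```python
# backends/json_dump.py: hypothetical backend serving skills from an in-memory list.
from openclaw_skill_vetter_mcp.backends import SkillBackend  # assumed import path
from openclaw_skill_vetter_mcp.models import Skill           # assumed import path

class JsonDumpBackend(SkillBackend):
    """Serves pre-loaded Skill objects, e.g. parsed from a JSON export."""

    def __init__(self, skills: list[Skill], directory: str):
        self._skills = skills
        self._directory = directory

    def get_skills(self) -> list[Skill]:
        return self._skills

    def get_skill_by_id(self, skill_id: str) -> Skill | None:
        return next((s for s in self._skills if s.id == skill_id), None)

    def get_directory(self) -> str:
        return self._directory
```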
Bug reports + feature requests: open a GitHub issue. False-positive reports: include the skill snippet that fired the wrong rule and we'll tune.
License
MIT — see LICENSE.
Related
silentwatch-mcp — cron silent-failure detection
openclaw-health-mcp — deployment health
openclaw-cost-tracker-mcp — token-cost telemetry
AI Production Discipline Framework — Notion template, $29 — methodology these MCPs implement
SPEC.md — full server design
Built by Temur Khan — independent practitioner on production AI systems. Contact: temur@pixelette.tech