vigile_scan_content
Scan agent skill file content for security issues. Submit raw content from .md or .rules files for analysis to get trust score and detailed findings.
Instructions
Scan the content of an agent skill file for security issues. Submit raw content from a claude.md, .cursorrules, skill.md, or similar file for analysis. Returns trust score and detailed findings.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | The raw text content to scan (max 100KB) | |
| file_type | No | File type: skill.md, claude.md, cursorrules, mdc-rule (default: skill.md) | |
| name | No | Optional name for the scan result |
Implementation Reference
- src/tools/scan-content.ts:7-69 (handler)The core handler function that POSTs content to /api/v1/scan/skill and formats the response including trust score, findings, and recommendations.
export async function scanContent( baseUrl: string, apiKey: string, content: string, fileType?: string, name?: string ): Promise<string> { const body = { skill_name: name || "inline-scan", content, file_type: fileType || "skill.md", platform: "claude-code", source: "mcp-scan", }; const { ok, status, data } = await fetchVigile(baseUrl, apiKey, "/api/v1/scan/skill", { method: "POST", body: JSON.stringify(body), }); if (!ok) { if (status === 429) { return [ "**Scan quota exceeded.**", "", data?.detail || "You've reached your monthly scan limit.", "", "Upgrade your plan at https://vigile.dev/pricing for more scans.", ].join("\n"); } return `Scan failed: ${data?.detail || `HTTP ${status}`}`; } const emoji = trustLevelEmoji(data.trust_level); const lines = [ `## ${emoji} Scan Result: ${data.skill_name || name || "Inline Scan"}`, "", `**Trust Score:** ${formatScore(data.trust_score)}`, `**Trust Level:** ${data.trust_level}`, `**File Type:** ${data.file_type}`, `**Findings:** ${data.findings_count} total (${data.critical_count} critical, ${data.high_count} high)`, ]; // Detailed findings if (data.findings && data.findings.length > 0) { lines.push("", "### Findings"); for (const f of data.findings) { const severity = f.severity === "critical" ? "🔴" : f.severity === "high" ? "🟠" : "🟡"; lines.push(``, `#### ${severity} [${f.severity.toUpperCase()}] ${f.title}`); lines.push(f.description); if (f.evidence) { lines.push(`**Evidence:** \`${f.evidence}\``); } if (f.recommendation) { lines.push(`**Recommendation:** ${f.recommendation}`); } } } else { lines.push("", "✅ No security findings detected."); } return lines.join("\n"); } - src/index.ts:84-96 (registration)Registers the 'vigile_scan_content' tool with the MCP server, defining input schema (content, file_type, name) and delegating to the scanContent handler.
server.tool( "vigile_scan_content", "Scan the content of an agent skill file for security issues. Submit raw content from a claude.md, .cursorrules, skill.md, or similar file for analysis. Returns trust score and detailed findings.", { content: z.string().min(1).max(100_000).describe("The raw text content to scan (max 100KB)"), file_type: z.string().min(1).max(30).optional().describe("File type: skill.md, claude.md, cursorrules, mdc-rule (default: skill.md)"), name: z.string().min(1).max(200).optional().describe("Optional name for the scan result"), }, async ({ content, file_type, name }) => { const result = await scanContent(API_BASE, API_KEY, content, file_type, name); return { content: [{ type: "text" as const, text: result }] }; } ); - src/index.ts:87-91 (schema)Zod validation schema for the tool's inputs: required content (1-100K chars), optional file_type and name.
{ content: z.string().min(1).max(100_000).describe("The raw text content to scan (max 100KB)"), file_type: z.string().min(1).max(30).optional().describe("File type: skill.md, claude.md, cursorrules, mdc-rule (default: skill.md)"), name: z.string().min(1).max(200).optional().describe("Optional name for the scan result"), }, - src/tools/api.ts:5-46 (helper)Generic fetch helper used by scanContent to call the Vigile API. Handles auth headers, JSON requests, and sanitized error messages.
export async function fetchVigile( baseUrl: string, apiKey: string, path: string, options?: { method?: string; body?: string } ): Promise<{ ok: boolean; status: number; data: any }> { const headers: Record<string, string> = { "Content-Type": "application/json", "User-Agent": "vigile-mcp/0.1.7", }; if (apiKey) { headers["Authorization"] = `Bearer ${apiKey}`; } try { const res = await fetch(`${baseUrl}${path}`, { method: options?.method || "GET", headers, body: options?.body, }); const data = await res.json().catch(() => null); return { ok: res.ok, status: res.status, data }; } catch (error: any) { // Sanitize error message — don't leak internal details like // hostnames, ports, file paths, or stack traces const rawMsg = error?.message || "Unknown error"; const safeMsg = rawMsg.includes("ECONNREFUSED") || rawMsg.includes("ENOTFOUND") ? "API server unreachable" : rawMsg.includes("ETIMEDOUT") || rawMsg.includes("timeout") ? "Request timed out" : rawMsg.includes("ECONNRESET") ? "Connection reset" : "Connection failed"; return { ok: false, status: 0, data: { detail: safeMsg }, }; } } - src/tools/api.ts:48-65 (helper)Helper functions to format trust level as emoji and trust score as percentage string.
export function trustLevelEmoji(level: string): string { switch (level) { case "trusted": return "🟢"; case "caution": return "🟡"; case "risky": return "🟠"; case "dangerous": return "🔴"; default: return "⚪"; } } export function formatScore(score: number): string { return `${Math.round(score)}/100`; }