Skip to main content
Glama

vigile_scan_content

Analyze agent skill files for security vulnerabilities. Submit content from claude.md, cursorrules, or similar files to receive trust scores and detailed security findings.

Instructions

Scan the content of an agent skill file for security issues. Submit raw content from a claude.md, .cursorrules, skill.md, or similar file for analysis. Returns trust score and detailed findings.

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
contentYesThe raw text content to scan (max 100KB)
file_typeNoFile type: skill.md, claude.md, cursorrules, mdc-rule (default: skill.md)
nameNoOptional name for the scan result

Implementation Reference

  • The scanContent function is the main handler for the vigile_scan_content tool. It accepts content, fileType, and name parameters, constructs a request body, calls the Vigile API at /api/v1/scan/skill, and formats the response with trust scores, severity findings, and recommendations.
    export async function scanContent(
      baseUrl: string,
      apiKey: string,
      content: string,
      fileType?: string,
      name?: string
    ): Promise<string> {
      const body = {
        skill_name: name || "inline-scan",
        content,
        file_type: fileType || "skill.md",
        platform: "claude-code",
        source: "mcp-scan",
      };
    
      const { ok, status, data } = await fetchVigile(baseUrl, apiKey, "/api/v1/scan/skill", {
        method: "POST",
        body: JSON.stringify(body),
      });
    
      if (!ok) {
        if (status === 429) {
          return [
            "**Scan quota exceeded.**",
            "",
            data?.detail || "You've reached your monthly scan limit.",
            "",
            "Upgrade your plan at https://vigile.dev/pricing for more scans.",
          ].join("\n");
        }
        return `Scan failed: ${data?.detail || `HTTP ${status}`}`;
      }
    
      const emoji = trustLevelEmoji(data.trust_level);
      const lines = [
        `## ${emoji} Scan Result: ${data.skill_name || name || "Inline Scan"}`,
        "",
        `**Trust Score:** ${formatScore(data.trust_score)}`,
        `**Trust Level:** ${data.trust_level}`,
        `**File Type:** ${data.file_type}`,
        `**Findings:** ${data.findings_count} total (${data.critical_count} critical, ${data.high_count} high)`,
      ];
    
      // Detailed findings
      if (data.findings && data.findings.length > 0) {
        lines.push("", "### Findings");
        for (const f of data.findings) {
          const severity = f.severity === "critical" ? "πŸ”΄" : f.severity === "high" ? "🟠" : "🟑";
          lines.push(``, `#### ${severity} [${f.severity.toUpperCase()}] ${f.title}`);
          lines.push(f.description);
          if (f.evidence) {
            lines.push(`**Evidence:** \`${f.evidence}\``);
          }
          if (f.recommendation) {
            lines.push(`**Recommendation:** ${f.recommendation}`);
          }
        }
      } else {
        lines.push("", "βœ… No security findings detected.");
      }
    
      return lines.join("\n");
    }
  • Zod schema definition for vigile_scan_content tool inputs: content (required string, 1-100KB), file_type (optional string, max 30 chars), and name (optional string, max 200 chars).
    {
      content: z.string().min(1).max(100_000).describe("The raw text content to scan (max 100KB)"),
      file_type: z.string().min(1).max(30).optional().describe("File type: skill.md, claude.md, cursorrules, mdc-rule (default: skill.md)"),
      name: z.string().min(1).max(200).optional().describe("Optional name for the scan result"),
    },
  • src/index.ts:93-107 (registration)
    Registration of vigile_scan_content tool with the MCP server using server.tool(). Includes the tool name, description, input schema, and async handler that calls scanContent and returns the result as text content.
    // ── Tool: vigile_scan_content ──
    
    server.tool(
      "vigile_scan_content",
      "Scan the content of an agent skill file for security issues. Submit raw content from a claude.md, .cursorrules, skill.md, or similar file for analysis. Returns trust score and detailed findings.",
      {
        content: z.string().min(1).max(100_000).describe("The raw text content to scan (max 100KB)"),
        file_type: z.string().min(1).max(30).optional().describe("File type: skill.md, claude.md, cursorrules, mdc-rule (default: skill.md)"),
        name: z.string().min(1).max(200).optional().describe("Optional name for the scan result"),
      },
      async ({ content, file_type, name }) => {
        const result = await scanContent(API_BASE, API_KEY, content, file_type, name);
        return { content: [{ type: "text" as const, text: result }] };
      }
    );
  • Helper utilities used by scanContent: fetchVigile (async HTTP client with auth headers and error sanitization), trustLevelEmoji (maps trust levels to emoji indicators), and formatScore (formats numeric score as X/100).
    export async function fetchVigile(
      baseUrl: string,
      apiKey: string,
      path: string,
      options?: { method?: string; body?: string }
    ): Promise<{ ok: boolean; status: number; data: any }> {
      const headers: Record<string, string> = {
        "Content-Type": "application/json",
        "User-Agent": "vigile-mcp/0.1.7",
      };
    
      if (apiKey) {
        headers["Authorization"] = `Bearer ${apiKey}`;
      }
    
      try {
        const res = await fetch(`${baseUrl}${path}`, {
          method: options?.method || "GET",
          headers,
          body: options?.body,
        });
    
        const data = await res.json().catch(() => null);
        return { ok: res.ok, status: res.status, data };
      } catch (error: any) {
        // Sanitize error message β€” don't leak internal details like
        // hostnames, ports, file paths, or stack traces
        const rawMsg = error?.message || "Unknown error";
        const safeMsg = rawMsg.includes("ECONNREFUSED") || rawMsg.includes("ENOTFOUND")
          ? "API server unreachable"
          : rawMsg.includes("ETIMEDOUT") || rawMsg.includes("timeout")
          ? "Request timed out"
          : rawMsg.includes("ECONNRESET")
          ? "Connection reset"
          : "Connection failed";
        return {
          ok: false,
          status: 0,
          data: { detail: safeMsg },
        };
      }
    }
    
    export function trustLevelEmoji(level: string): string {
      switch (level) {
        case "trusted":
          return "🟒";
        case "caution":
          return "🟑";
        case "risky":
          return "🟠";
        case "dangerous":
          return "πŸ”΄";
        default:
          return "βšͺ";
      }
    }
    
    export function formatScore(score: number): string {
      return `${Math.round(score)}/100`;
    }

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Vigile-ai/vigile-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server