
server_audit

Read-only · Idempotent

Run a security audit on a server. Scans 30 categories with 457 checks, returns a score from 0 to 100, per-category scores, and quick wins.

Instructions

Run a security audit on a server. Scans 30 categories with 457 checks. Returns score (0-100), per-category scores, and quick wins. Formats: 'summary' (compact text), 'json' (full AuditResult), 'score' (number only). Supports compliance filtering (cis-level1, cis-level2, pci-dss, hipaa), category/severity filtering, snapshot save/compare, threshold gate, and profile filtering. Requires SSH access. For health trends use server_doctor instead.
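
The parameter combinations above can be sketched as a few example argument objects. This is an illustrative sketch only: the parameter names come from the schema below, but the server name "web-1" and the snapshot name "pre-upgrade" are made up for the example.

```typescript
// Illustrative server_audit argument objects (server/snapshot names are hypothetical).

// Just the number: useful for scripting.
const quickScore = { server: "web-1", format: "score" as const };

// Gate a deploy: the tool returns an error if the score drops below 80.
const gate = { server: "web-1", threshold: 80 };

// Compare a saved pre-upgrade snapshot against the most recent audit.
const drift = { server: "web-1", compare: "pre-upgrade:latest" };

console.log(quickScore.format, gate.threshold, drift.compare);
```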

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| server | No | Server name or IP. Auto-selected if only one server exists. | |
| format | No | Output format: summary (default), json (full result), score (number only) | summary |
| framework | No | Compliance framework filter. Returns per-control pass/fail summary alongside audit results. | |
| explain | No | When true, include why + fix explanation for each failing check in summary format output. Capped at 10 checks. | |
| category | No | Filter results to a specific category (e.g. 'SSH', 'Firewall', 'Docker'). | |
| severity | No | Filter checks by severity level. | |
| snapshot | No | Save audit snapshot. true for auto-name, string for custom name. | |
| compare | No | Compare two snapshots: format before:after (e.g. pre-upgrade:latest) | |
| threshold | No | Minimum passing score (1-100). Returns error if score is below threshold. | |
| profile | No | Server profile filter (web-server, database, mail-server). | |
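
The compare parameter packs two snapshot references into a single `before:after` string. A minimal sketch of how such a value can be split and validated; `parseCompareRef` is a hypothetical helper, and it is slightly stricter than the handler shown later on this page (it also rejects empty references, whereas the handler only checks for exactly two parts):

```typescript
// Hypothetical helper: split a "before:after" compare value into its two
// snapshot references, returning null when the format is invalid.
function parseCompareRef(compare: string): { before: string; after: string } | null {
  const parts = compare.split(":");
  if (parts.length !== 2 || parts.some((p) => p.length === 0)) return null;
  const [before, after] = parts;
  return { before, after };
}

console.log(parseCompareRef("pre-upgrade:latest")); // { before: 'pre-upgrade', after: 'latest' }
console.log(parseCompareRef("oops"));               // null
```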

Implementation Reference

  • Main handler function for the server_audit tool. Resolves the target server, runs a 457-check security audit, applies filtering (category/severity/profile), supports compare mode (snapshot diff), threshold gating, compliance framework filtering, baseline regression tracking, and outputs in summary/json/score formats.
    export async function handleServerAudit(params: {
      server?: string;
      format?: "summary" | "json" | "score";
      framework?: "cis-level1" | "cis-level2" | "pci-dss" | "hipaa";
      explain?: boolean;
      category?: string;
      severity?: "critical" | "warning" | "info";
      snapshot?: boolean | string;
      compare?: string;
      threshold?: number;
      profile?: string;
    }, mcpServer?: McpServer): Promise<McpResponse> {
      try {
        const servers = getServers();
        if (servers.length === 0) {
          return mcpError("No servers found", undefined, [
            { command: "kastell add", reason: "Add a server first" },
          ]);
        }
    
        const server = resolveServerForMcp(params, servers);
        if (!server) {
          if (params.server) {
            return mcpError(
              `Server not found: ${params.server}`,
              `Available servers: ${servers.map((s) => s.name).join(", ")}`,
            );
          }
          return mcpError(
            "Multiple servers found. Specify which server to audit.",
            `Available: ${servers.map((s) => s.name).join(", ")}`,
          );
        }
    
        // Compare mode: early return with diff of two snapshots
        if (params.compare !== undefined) {
          const parts = params.compare.split(":");
          if (parts.length !== 2) {
            return mcpError(
              "--compare requires format: before:after",
              "Example: server_audit { server: 'my-server', compare: 'pre-upgrade:latest' }",
            );
          }
          const [beforeRef, afterRef] = parts;
          const beforeSnap = await resolveSnapshotRef(server.ip, beforeRef);
          if (!beforeSnap) {
            return mcpError(`Snapshot not found: '${beforeRef}'`, "Use server_audit with snapshot:true to create one");
          }
          const afterSnap = await resolveSnapshotRef(server.ip, afterRef);
          if (!afterSnap) {
            return mcpError(`Snapshot not found: '${afterRef}'`, "Use server_audit with snapshot:true to create one");
          }
          const diff = diffAudits(beforeSnap.audit, afterSnap.audit, { before: beforeRef, after: afterRef });
          const format = params.format ?? "summary";
          if (format === "json") {
            return mcpSuccess(diff as unknown as Record<string, unknown>);
          }
          return mcpSuccess({ summary: formatDiffJson(diff), scoreDelta: diff.scoreDelta });
        }
    
        await mcpLog(mcpServer, `Starting 457-check audit on ${server.name}`);
    
        const platform = server.platform ?? server.mode ?? "bare";
        const result = await runAudit(server.ip, server.name, platform);
    
        if (!result.success || !result.data) {
          return mcpError(
            result.error ?? "Audit failed",
            result.hint,
          );
        }
    
        const auditResult = result.data;
        await mcpLog(mcpServer, `Audit complete, score: ${auditResult.overallScore}`);
    
        const baseline = loadBaseline(auditResult.serverIp);
        const passedIds = extractPassedCheckIds(auditResult);
        const regression = baseline ? checkRegression(baseline, auditResult, passedIds) : null;
    
        if (shouldUpdateBaseline(regression, false)) {
          await saveBaselineSafe(auditResult, baseline, passedIds);
        }
    
        // Apply category/severity filter if provided
        let filteredResult = auditResult;
        if (params.category || params.severity) {
          const filter: AuditFilter = {};
          if (params.category) filter.category = params.category;
          if (params.severity) filter.severity = params.severity;
          filteredResult = filterAuditResult(auditResult, filter);
        }
    
        // Apply profile filter after category/severity filter
        if (params.profile !== undefined) {
          if (!isValidProfile(params.profile)) {
            return mcpError(
              `Invalid profile: ${params.profile}`,
              `Available profiles: ${listAllProfileNames().join(", ")}`,
            );
          }
          filteredResult = {
            ...filteredResult,
            categories: filteredResult.categories
              .map((cat) => ({
                ...cat,
                checks: filterChecksByProfile(cat.checks, params.profile!),
              }))
              .filter((cat) => cat.checks.length > 0),
          };
        }
    
        // Threshold check (uses unfiltered auditResult.overallScore)
        if (params.threshold !== undefined && auditResult.overallScore < params.threshold) {
          return mcpError(
            `Score ${auditResult.overallScore} is below threshold ${params.threshold}`,
            "Run server_fix to improve the score",
            [{ command: "server_fix { server: '" + server.name + "', dryRun: true }", reason: "Preview available fixes" }],
          );
        }
    
        const format = params.format ?? "summary";
    
        if (format === "json") {
          const jsonResult: Record<string, unknown> = { ...filteredResult };
          if (params.framework) {
            const fw = FRAMEWORK_KEY_MAP[params.framework];
            const detail = calculateComplianceDetail(filteredResult.categories);
            jsonResult.complianceDetail = detail.filter((d) => d.framework === fw);
          }
          if (regression) {
            jsonResult.baselineRegression = regression;
          }
          return mcpSuccess(jsonResult, { largeResult: true });
        }
    
        if (format === "score") {
          return mcpSuccess({ score: auditResult.overallScore });
        }
    
        // summary format: compact text for AI consumption
        const categoryLines = filteredResult.categories.map(
          (c) => `  ${c.name}: ${c.score}/${c.maxScore}`,
        );
    
        const quickWinLines = filteredResult.quickWins.slice(0, 3).map(
          (qw) => `  - ${qw.description} (${qw.currentScore} -> ${qw.projectedScore})`,
        );
    
        const summaryParts = [
          `Server: ${filteredResult.serverName} (${filteredResult.serverIp})`,
          `Platform: ${filteredResult.platform}`,
          `Overall Score: ${auditResult.overallScore}/100`,
          "",
          "Categories:",
          ...categoryLines,
        ];
    
        if (quickWinLines.length > 0) {
          summaryParts.push("", "Top Quick Wins:", ...quickWinLines);
        }
    
        // Add compliance summary when framework param provided
        if (params.framework) {
          const fw = FRAMEWORK_KEY_MAP[params.framework];
          const detail = calculateComplianceDetail(filteredResult.categories);
          const fwScore = detail.find((d) => d.framework === fw);
          if (fwScore) {
            summaryParts.push(
              "",
              `Compliance (${fwScore.version}):`,
              `  Pass Rate: ${fwScore.passedControls}/${fwScore.totalControls} (${fwScore.passRate}%)`,
              `  Failing: ${fwScore.controls.filter((c) => !c.passed).length} controls`,
            );
          }
        }
    
        // Explain: append failing check details when explain param is set (summary format only)
        if (params.explain) {
          const failingChecks = filteredResult.categories
            .flatMap((c) => c.checks)
            .filter((ch) => !ch.passed && ch.explain);
          if (failingChecks.length > 0) {
            summaryParts.push("", "Failing Checks (with explanations):");
            const maxDisplay = 10;
            for (const ch of failingChecks.slice(0, maxDisplay)) {
              summaryParts.push(`  [${ch.severity}] ${ch.id}: ${ch.name}`);
              summaryParts.push(`    Why: ${ch.explain}`);
            }
            if (failingChecks.length > maxDisplay) {
              summaryParts.push(`  ... and ${failingChecks.length - maxDisplay} more failing checks`);
            }
          }
        }
    
        // Baseline regression info
        if (regression) {
          const regressionLines = formatRegressionSummary(regression);
          summaryParts.push("", ...regressionLines.map((l) => l.text));
        }
    
        summaryParts.push(
          "",
          `Timestamp: ${filteredResult.timestamp}`,
        );
    
        const responseData: Record<string, unknown> = {
          summary: summaryParts.join("\n"),
          overallScore: auditResult.overallScore,
          suggested_actions: [
            { command: `server_audit { server: '${server.name}', format: 'json' }`, reason: "Get full audit details" },
          ],
        };
    
        if (filteredResult.skippedCategories && filteredResult.skippedCategories.length > 0) {
          responseData.skippedCategories = filteredResult.skippedCategories;
        }
    
        // Save snapshot if requested (uses unfiltered auditResult)
        if (params.snapshot !== undefined) {
          const snapshotName = typeof params.snapshot === "string" ? params.snapshot : undefined;
          await saveSnapshot(auditResult, snapshotName);
        }
    
        return mcpSuccess(responseData, { largeResult: true });
      } catch (error: unknown) {
        return mcpError(sanitizeStderr(getErrorMessage(error)));
      }
    }
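
Note that the threshold gate in the handler compares the unfiltered overall score against params.threshold, so category or profile filtering never changes the gate's outcome. That branch can be isolated as a small pure function; `checkThreshold` is a hypothetical helper mirroring the logic, not part of the tool:

```typescript
// Hypothetical helper mirroring the handler's threshold branch: returns the
// error message when the score falls below the gate, or null when it passes
// (or when no gate is configured).
function checkThreshold(overallScore: number, threshold?: number): string | null {
  if (threshold === undefined || overallScore >= threshold) return null;
  return `Score ${overallScore} is below threshold ${threshold}`;
}

console.log(checkThreshold(72, 80)); // "Score 72 is below threshold 80"
console.log(checkThreshold(91, 80)); // null
console.log(checkThreshold(50));     // null (no gate configured)
```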
  • Zod schema defining all input parameters for server_audit: server, format, framework, explain, category, severity, snapshot, compare, threshold, and profile.
    export const serverAuditSchema = {
      server: z.string().optional().describe("Server name or IP. Auto-selected if only one server exists."),
      format: z.enum(["summary", "json", "score"]).default("summary")
        .describe("Output format: summary (default), json (full result), score (number only)"),
      framework: z.enum(["cis-level1", "cis-level2", "pci-dss", "hipaa"]).optional()
        .describe("Compliance framework filter. Returns per-control pass/fail summary alongside audit results."),
      explain: z.boolean().optional().describe(
        "When true, include why + fix explanation for each failing check in summary format output. Capped at 10 checks."
      ),
      category: z.string().optional().describe("Filter results to a specific category (e.g. 'SSH', 'Firewall', 'Docker')."),
      severity: z.enum(["critical", "warning", "info"]).optional().describe("Filter checks by severity level."),
      snapshot: z.union([z.boolean(), z.string()]).optional()
        .describe("Save audit snapshot. true for auto-name, string for custom name."),
      compare: z.string().optional().describe("Compare two snapshots: format before:after (e.g. pre-upgrade:latest)"),
      threshold: z.number().int().min(1).max(100).optional()
        .describe("Minimum passing score (1-100). Returns error if score is below threshold."),
      profile: z.string().optional().describe("Server profile filter (web-server, database, mail-server)."),
    };
  • Registration of the server_audit tool on the MCP server, wiring the schema and handler with metadata annotations (readOnly, non-destructive, idempotent).
    server.registerTool("server_audit", {
      description:
        "Run a security audit on a server. Scans 30 categories with 457 checks. Returns score (0-100), per-category scores, and quick wins. Formats: 'summary' (compact text), 'json' (full AuditResult), 'score' (number only). Supports compliance filtering (cis-level1, cis-level2, pci-dss, hipaa), category/severity filtering, snapshot save/compare, threshold gate, and profile filtering. Requires SSH access. For health trends use server_doctor instead.",
      inputSchema: serverAuditSchema,
      annotations: {
        title: "Server Security Audit",
        readOnlyHint: true,
        destructiveHint: false,
        idempotentHint: true,
        openWorldHint: true,
      },
    }, async (params) => {
      return handleServerAudit(params, server);
    });
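
Once registered, the tool is reachable through the standard MCP tools/call method. A sketch of the JSON-RPC request an MCP client would send to invoke it; the arguments shown (and the server name "web-1") are illustrative:

```typescript
// Sketch of an MCP tools/call request invoking the registered server_audit
// tool (argument values are hypothetical examples).
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "server_audit",
    arguments: { server: "web-1", format: "summary", explain: true },
  },
};

console.log(JSON.stringify(request));
```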
  • Imports from the core audit module and MCP utilities that support the server_audit handler: runAudit, filterAuditResult, snapshot diff/save, compliance scoring, profile filtering, and baseline regression tracking.
    import { z } from "zod";
    import { getServers } from "../../utils/config.js";
    import { runAudit } from "../../core/audit/index.js";
    import { filterAuditResult } from "../../core/audit/filter.js";
    import { resolveSnapshotRef, diffAudits, formatDiffJson } from "../../core/audit/diff.js";
    import { saveSnapshot } from "../../core/audit/snapshot.js";
    import type { AuditFilter } from "../../core/audit/filter.js";
    import {
      resolveServerForMcp,
      mcpSuccess,
      mcpError,
      mcpLog,
      type McpResponse,
    } from "../utils.js";
    import { getErrorMessage, sanitizeStderr } from "../../utils/errorMapper.js";
    import { calculateComplianceDetail } from "../../core/audit/compliance/scoring.js";
    import { FRAMEWORK_KEY_MAP } from "../../core/audit/compliance/types.js";
    import type { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { filterChecksByProfile, isValidProfile, listAllProfileNames } from "../../core/audit/profiles.js";
    import { saveBaselineSafe, loadBaseline, checkRegression, formatRegressionSummary, extractPassedCheckIds, shouldUpdateBaseline } from "../../core/audit/regression.js";
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly and idempotent behavior. The description adds context about what the tool returns (score, per-category scores, quick wins) and formats, but does not contradict annotations. It could mention if any side effects occur, but none exist, so it's transparent enough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured paragraph that front-loads the core purpose, then lists features efficiently. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having 10 parameters and no output schema, the description covers the overall functionality, output types, filtering capabilities, and mentions an alternative tool. It provides sufficient context for an agent to understand and use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds extra context for several parameters, e.g., 'Auto-selected if only one server exists' for server, 'Returns per-control pass/fail summary' for framework, and 'Returns error if score is below threshold' for threshold.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run a security audit on a server.' It details the scope (30 categories, 457 checks), output formats, and distinguishes from sibling tool server_doctor, which handles health trends.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly mentions a prerequisite ('Requires SSH access') and provides an alternative usage scenario ('For health trends use server_doctor instead'), helping the agent decide when to use this tool vs others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
