brand_audit_drift

Batch audit multiple content items to detect systematic brand drift. Scores each item against brand identity, computes corpus-level statistics, and identifies recurring patterns. Writes a detailed report to .brand/drift-report.md.

Instructions

Batch audit multiple content items to detect systematic brand drift. Scores each item against brand identity, computes corpus-level statistics (mean, median, stddev), and identifies recurring patterns across items (e.g., same off-palette color in 4/5 items). Writes a detailed drift report to .brand/drift-report.md. Use when reviewing a content corpus, auditing a website, or checking whether brand identity is being applied consistently across multiple pieces.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| items | Yes | JSON array of content items to audit. Each item: `{"content": "text or HTML or file path", "label": "descriptive name"}`. Max 20 items. Example: `[{"content": "public/page.html", "label": "Homepage"}, {"content": "<p>Draft copy</p>", "label": "Email draft"}]` | — |
| threshold | No | Minimum acceptable score (0-100). Items below this are flagged as drifted. | 70 |
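Note that `items` is a JSON-encoded string, not a raw array. A minimal invocation payload (labels and paths are illustrative) might look like:

```typescript
// Hypothetical arguments for brand_audit_drift. `items` must be a JSON
// *string*; a raw array would fail the z.string() schema validation.
const args = {
  items: JSON.stringify([
    { content: "public/page.html", label: "Homepage" },     // file path within cwd
    { content: "<p>Draft copy</p>", label: "Email draft" }, // inline HTML
  ]),
  threshold: 70, // optional; defaults to 70
};
```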

Implementation Reference

  • Main handler function that performs the brand drift audit: loads brand context, parses input items, scores each item via scoreContent, computes corpus statistics, detects systematic drift patterns, writes a drift report, and returns results.
    async function handler(input: AuditDriftParams) {
      const brandDir = new BrandDir(process.cwd());
    
      if (!(await brandDir.exists())) {
        return buildResponse({
          what_happened: "No .brand/ directory found",
          next_steps: ["Run brand_start to create a brand system first"],
          data: { error: ERROR_CODES.NOT_INITIALIZED },
        });
      }
    
      let ctx;
      try {
        ctx = await loadBrandContext(brandDir);
      } catch {
        return buildResponse({
          what_happened: "Could not read brand identity data",
          next_steps: ["Run brand_extract_web to populate core identity"],
          data: { error: ERROR_CODES.NO_CORE_IDENTITY },
        });
      }
    
      // Parse items
      let items: DriftItem[];
      try {
        const parsed = JSON.parse(input.items);
        if (!Array.isArray(parsed)) throw new Error("Expected array");
        items = parsed.map((item: { content?: string; label?: string }, i: number) => ({
          label: item.label || `Item ${i + 1}`,
          content: item.content || (typeof item === "string" ? item : ""),
        }));
      } catch {
        return buildResponse({
          what_happened: "Could not parse items — expected a JSON array",
          next_steps: [
            'Provide items as a JSON array: [{"content": "...", "label": "Homepage"}, ...]',
          ],
          data: { error: ERROR_CODES.INVALID_ITEMS },
        });
      }
    
      if (items.length === 0) {
        return buildResponse({
          what_happened: "No items to audit",
          next_steps: ["Provide at least one content item"],
          data: { error: ERROR_CODES.EMPTY_ITEMS },
        });
      }
    
      // Cap at 20 items
      if (items.length > 20) {
        items = items.slice(0, 20);
      }
    
      // Score each item
      const scored: Array<{ label: string; score: ContentScore }> = [];
      for (const item of items) {
        const { content, isHtml } = await resolveContent(item.content);
        if (!content.trim()) continue;
        const result = scoreContent(content, isHtml, ctx);
        scored.push({ label: item.label, score: result });
      }
    
      if (scored.length === 0) {
        return buildResponse({
          what_happened: "All items were empty — nothing to audit",
          next_steps: ["Provide content items with actual text or HTML"],
          data: { error: ERROR_CODES.ALL_EMPTY },
        });
      }
    
      // Compute stats
      const overallScores = scored.map((s) => s.score.overall);
      const belowThreshold = overallScores.filter((s) => s < input.threshold).length;
    
      const corpusStats = {
        mean_score: Math.round(mean(overallScores)),
        median_score: Math.round(median(overallScores)),
        min_score: Math.min(...overallScores),
        max_score: Math.max(...overallScores),
        stddev: Math.round(stddev(overallScores) * 10) / 10,
        items_below_threshold: belowThreshold,
      };
    
      // Per-dimension averages
      const dimNames = ["token_compliance", "visual_compliance", "voice_alignment", "message_coverage"] as const;
      const perDimensionAverages: Record<string, number> = {};
      for (const dim of dimNames) {
        const scores = scored
          .map((s) => s.score.dimensions[dim]?.score)
          .filter((s): s is number => s !== undefined);
        if (scores.length > 0) {
          perDimensionAverages[dim] = Math.round(mean(scores));
        }
      }
    
      // Detect systematic drift
      const driftPatterns = detectDriftPatterns(scored);
    
      // Build per-item results (compact for response)
      const itemResults: ItemResult[] = scored.map((s) => ({
        label: s.label,
        overall_score: s.score.overall,
        below_threshold: s.score.overall < input.threshold,
        dimensions: Object.fromEntries(
          Object.entries(s.score.dimensions)
            .filter(([, v]) => v !== undefined)
            .map(([k, v]) => [k, v!.score])
        ),
        top_issues: s.score.issues
          .filter((i) => i.severity !== "info")
          .slice(0, 3)
          .map((i) => i.message),
      }));
    
      // Write drift report
      const report = buildDriftReport(itemResults, corpusStats, driftPatterns, input.threshold);
      await brandDir.writeMarkdown("drift-report.md", report);
    
      // Build summary
      const driftSummary = driftPatterns.length > 0
        ? ` Systematic drift in ${driftPatterns.map((p) => p.dimension.replace(/_/g, " ")).join(", ")}.`
        : "";
    
      return buildResponse({
        what_happened: `Drift audit: ${belowThreshold}/${scored.length} items below threshold (${input.threshold}). Mean: ${corpusStats.mean_score}/100.${driftSummary}`,
        next_steps: belowThreshold > 0
          ? [
              `${belowThreshold} item(s) scored below ${input.threshold} — review .brand/drift-report.md`,
              ...(driftPatterns.length > 0
                ? [`Systematic drift detected — fix the root cause, not individual items`]
                : []),
              "Run brand_write before creating new content to refresh brand context",
            ]
          : ["All items above threshold — brand identity is being applied consistently"],
        data: {
          items_audited: scored.length,
          threshold: input.threshold,
          items_below_threshold: belowThreshold,
          corpus_stats: corpusStats,
          per_dimension_averages: perDimensionAverages,
          systematic_drift: driftPatterns.slice(0, 5),
          items: itemResults,
          report_file: ".brand/drift-report.md",
        } as unknown as Record<string, unknown>,
      });
    }
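The `mean`, `median`, and `stddev` helpers are referenced in the handler but not shown in the source. Assumed implementations (using population standard deviation, which may differ from the actual helpers) would look like:

```typescript
// Assumed statistics helpers (not shown in the source).
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  // Even-length arrays average the two middle values.
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function stddev(xs: number[]): number {
  const m = mean(xs);
  // Population (not sample) standard deviation.
  return Math.sqrt(xs.reduce((acc, x) => acc + (x - m) ** 2, 0) / xs.length);
}
```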
  • Input schema definition using Zod: items (JSON string array of content with labels) and threshold (number 0-100, default 70) for minimum acceptable score.
    const paramsShape = {
      items: z
        .string()
        .describe(
          'JSON array of content items to audit. Each item: {"content": "text or HTML or file path", "label": "descriptive name"}. Max 20 items. Example: \'[{"content": "public/page.html", "label": "Homepage"}, {"content": "<p>Draft copy</p>", "label": "Email draft"}]\''
        ),
      threshold: z
        .number()
        .min(0)
        .max(100)
        .default(70)
        .describe(
          "Minimum acceptable score (0-100). Items below this are flagged as drifted. Default: 70."
        ),
    };
    
    const ParamsSchema = z.object(paramsShape);
    type AuditDriftParams = z.infer<typeof ParamsSchema>;
  • Registration function that registers 'brand_audit_drift' tool on the MCP server with description and parameter schema.
    export function register(server: McpServer) {
      server.tool(
        "brand_audit_drift",
        "Batch audit multiple content items to detect systematic brand drift. Scores each item against brand identity, computes corpus-level statistics (mean, median, stddev), and identifies recurring patterns across items (e.g., same off-palette color in 4/5 items). Writes a detailed drift report to .brand/drift-report.md. Use when reviewing a content corpus, auditing a website, or checking whether brand identity is being applied consistently across multiple pieces.",
        paramsShape,
        async (args) => {
          const parsed = safeParseParams(ParamsSchema, args);
          if (!parsed.success) return parsed.response;
          return handler(parsed.data);
        },
      );
    }
  • src/server.ts:34-34 (registration)
    Import of the register function from brand-audit-drift.ts in the server file.
    import { register as registerAuditDrift } from "./tools/brand-audit-drift.js";
  • src/server.ts:93-93 (registration)
    Invocation of registerAuditDrift(server) to register the tool on the MCP server.
    registerAuditDrift(server);     // Batch drift detection
  • Helper function to resolve content - reads file if it's a valid file path (HTML, MD, TXT) within cwd, otherwise treats input as raw content.
    async function resolveContent(input: string): Promise<{ content: string; isHtml: boolean }> {
      // File path must be within cwd to prevent arbitrary file reads
      if (/\.(html?|md|txt)$/i.test(input.trim()) && !input.includes("\n") && input.length < 500) {
        const { resolve } = await import("node:path");
        const resolvedPath = resolve(process.cwd(), input.trim());
        if (isPathWithinBase(resolvedPath, process.cwd())) {
          try {
            const content = await readFile(resolvedPath, "utf-8");
            return { content, isHtml: /\.html?$/i.test(input.trim()) || isHtmlContent(content) };
          } catch { /* not a file */ }
        }
      }
      return { content: input, isHtml: isHtmlContent(input) };
    }
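The `isPathWithinBase` guard is referenced but not shown. One plausible implementation (an assumption; the actual helper lives elsewhere in the codebase) rejects any resolved path that escapes the base directory:

```typescript
import { relative, isAbsolute } from "node:path";

// Plausible sketch of isPathWithinBase: a target is inside the base iff the
// relative path from base to target neither climbs out ("..") nor jumps to
// another root (absolute).
function isPathWithinBase(target: string, base: string): boolean {
  const rel = relative(base, target);
  return rel !== "" && !rel.startsWith("..") && !isAbsolute(rel);
}
```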
  • Detects systematic drift patterns across items - issues appearing in >50% of items or dimensions consistently scoring below 60.
    function detectDriftPatterns(
      results: Array<{ label: string; score: ContentScore }>,
    ): DriftPattern[] {
      const patterns: DriftPattern[] = [];
      const total = results.length;
      if (total < 2) return patterns;
    
      // Collect all issues across items, grouped by dimension+message
      const issueMap = new Map<string, { count: number; dimension: string; message: string; labels: string[] }>();
    
      for (const r of results) {
        for (const issue of r.score.issues) {
          const key = `${issue.dimension}:${issue.message}`;
          const entry = issueMap.get(key) || { count: 0, dimension: issue.dimension, message: issue.message, labels: [] };
          entry.count++;
          entry.labels.push(r.label);
          issueMap.set(key, entry);
        }
      }
    
      // Issues appearing in >50% of items are systematic
      for (const [, entry] of issueMap) {
        if (entry.count >= Math.ceil(total * 0.5) && entry.count >= 2) {
          patterns.push({
            dimension: entry.dimension,
            pattern: entry.message,
            affected_items: entry.count,
            total_items: total,
            detail: `Affects: ${entry.labels.slice(0, 3).join(", ")}${entry.labels.length > 3 ? ` (+${entry.labels.length - 3} more)` : ""}`,
          });
        }
      }
    
      // Check for dimensions that are consistently low across items
      const dimNames = ["token_compliance", "visual_compliance", "voice_alignment", "message_coverage"] as const;
      for (const dim of dimNames) {
        const scores = results
          .map((r) => r.score.dimensions[dim]?.score)
          .filter((s): s is number => s !== undefined);
        if (scores.length >= 2) {
          const avg = mean(scores);
          if (avg < 60) {
            patterns.push({
              dimension: dim,
              pattern: `Consistently low ${dim.replace(/_/g, " ")}`,
              affected_items: scores.filter((s) => s < 60).length,
              total_items: scores.length,
              detail: `Average: ${Math.round(avg)}/100`,
            });
          }
        }
      }
    
      return patterns;
    }
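The majority cutoff used above can be isolated as a small predicate (`isSystematic` is a hypothetical name for illustration, not part of the source). For 5 items, an issue must appear in at least 3 of them to count as systematic; for 2 items, it must appear in both:

```typescript
// The "systematic" cutoff from detectDriftPatterns: an issue qualifies when
// it appears in at least half the items (rounded up) and in at least 2 items.
function isSystematic(count: number, total: number): boolean {
  return count >= Math.ceil(total * 0.5) && count >= 2;
}
```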
  • Builds a markdown drift report with summary stats, systematic patterns, per-item scores, and details on below-threshold items.
    function buildDriftReport(
      itemResults: ItemResult[],
      corpusStats: Record<string, unknown>,
      driftPatterns: DriftPattern[],
      threshold: number,
    ): string {
      const lines: string[] = [
        "# Brand Drift Report",
        "",
        `**Generated:** ${new Date().toISOString().split("T")[0]}`,
        `**Items audited:** ${itemResults.length}`,
        `**Threshold:** ${threshold}`,
        "",
        "## Summary",
        "",
        `| Metric | Value |`,
        `|--------|-------|`,
        `| Mean score | ${(corpusStats.mean_score as number)}/100 |`,
        `| Median score | ${(corpusStats.median_score as number)}/100 |`,
        `| Items below threshold | ${(corpusStats.items_below_threshold as number)}/${itemResults.length} |`,
        "",
      ];
    
      if (driftPatterns.length > 0) {
        lines.push("## Systematic Drift Patterns", "");
        for (const p of driftPatterns) {
          lines.push(`- **${p.dimension}**: ${p.pattern} (${p.affected_items}/${p.total_items} items)`);
          lines.push(`  - ${p.detail}`);
        }
        lines.push("");
      }
    
      lines.push("## Per-Item Scores", "");
      lines.push("| Item | Score | Status |");
      lines.push("|------|-------|--------|");
      for (const item of itemResults) {
        const status = item.below_threshold ? "BELOW" : "OK";
        lines.push(`| ${item.label} | ${item.overall_score}/100 | ${status} |`);
      }
      lines.push("");
    
      // Detail section for below-threshold items
      const belowItems = itemResults.filter((i) => i.below_threshold);
      if (belowItems.length > 0) {
        lines.push("## Items Below Threshold", "");
        for (const item of belowItems) {
          lines.push(`### ${item.label} (${item.overall_score}/100)`, "");
          if (item.top_issues.length > 0) {
            for (const issue of item.top_issues) {
              lines.push(`- ${issue}`);
            }
            lines.push("");
          }
        }
      }
    
      return lines.join("\n");
    }
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description fully discloses the key behaviors: it scores items, computes statistics, identifies patterns, and writes a drift report to .brand/drift-report.md. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, followed by output details and usage guidance. Every sentence is necessary, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks any description of the return value or response format. Since there is no output schema, the description should clarify whether the tool returns the report content or just a confirmation. It also makes no mention of error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds no meaning beyond the schema's own descriptions for 'items' and 'threshold'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'audit' and resource 'content items' with clear focus on 'systematic brand drift'. It distinguishes from sibling tools like brand_audit by emphasizing batch processing and corpus-level statistics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use the tool: 'reviewing a content corpus, auditing a website, or checking consistency.' However, it does not say when not to use it or point to alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
