kObsidian MCP

Lint Wiki

wiki.lint
Read-only · Idempotent

Health-check your Obsidian wiki to detect orphans, broken links, stale sources, missing concept pages, singleton tags, and index.md parity issues. Get grouped findings with totals.

Instructions

Health-check the wiki: orphans, broken links, stale sources, missing concept pages, singleton tags, and index.md parity. Read-only; returns grouped findings with totals.

Operates on the session-active vault (inspect with vault.current, switch with vault.select) unless an explicit vaultPath argument is passed; vaultPath always wins.
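
A minimal invocation sketch, assuming an already-connected `client` from the MCP TypeScript SDK (the staleDays value here is illustrative):

    // Lint the session-active vault, flagging pages older than 90 days as stale.
    const result = await client.callTool({
      name: "wiki.lint",
      arguments: { staleDays: 90 },
    });
    // Read-only: the call never writes to the vault, and `changed` is always false.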

Input Schema

Name       Required  Description  Default
staleDays  No        -            -
wikiRoot   No        -            -
vaultPath  No        -            -

Output Schema

No output fields are documented; the tool registers a loose object schema (see `looseObjectSchema` in the Implementation Reference below).
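
For orientation, a clean run might return a result like this (illustrative values; the shape mirrors the handler's return statement in the Implementation Reference below):

    // Illustrative only; actual values depend on the vault and configuration.
    const exampleResult = {
      changed: false,
      target: "wiki",
      summary: "Wiki clean (42 pages scanned)",
      pagesScanned: 42,
      staleDays: 180,
      totals: {
        orphans: 0, brokenLinks: 0, stale: 0, missingPages: 0,
        tagSingletons: 0, indexMissingFromIndex: 0, indexStaleEntries: 0, all: 0,
      },
      findings: {
        orphans: [], brokenLinks: [], stale: [], missingPages: [],
        tagSingletons: [], indexMismatch: { missingFromIndex: [], staleEntries: [] },
      },
    };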

Implementation Reference

  • Main handler function `lintWiki` that performs the wiki linting logic: checks for orphans, broken links, stale pages, missing concept/entity/source pages, singleton tags, and index.md parity. Returns structured findings with totals.
    export async function lintWiki(context: DomainContext, args: WikiLintArgs) {
      const paths = resolveWikiPaths(context, args);
      const staleDays = args.staleDays ?? context.env.KOBSIDIAN_WIKI_STALE_DAYS;
      const noteIndex: NoteIndex = await collectNoteIndex(paths.vaultRoot);
    
      const wikiFiles = await loadWikiFiles(paths);
      // Page-level checks cover source, concept, and entity pages only; other
      // wiki files (such as index.md) still contribute links in the pass below.
      const wikiPages = wikiFiles.filter(
        (file) =>
          file.classification === "source" ||
          file.classification === "concept" ||
          file.classification === "entity",
      );
    
      const pagePaths = new Set(wikiPages.map((file) => file.relative));
      const inlinks = new Map<string, Set<string>>();
      const outlinks = new Map<string, Set<string>>();
      for (const page of wikiPages) {
        inlinks.set(page.relative, new Set());
        outlinks.set(page.relative, new Set());
      }
    
      // One pass over every wiki file: resolve each wiki-link, recording broken
      // targets and inbound/outbound edges between wiki pages.
      const brokenLinks: BrokenLinkFinding[] = [];
      const missingPages: MissingPageFinding[] = [];
    
      for (const file of wikiFiles) {
        const sourceOutlinks = outlinks.get(file.relative);
        for (const link of extractLinksFromContent(file.content)) {
          if (link.type !== "wiki") continue;
          const normalized = normalizeLinkTarget(link.path);
          const resolved = resolveIndexedLink(normalized, noteIndex);
          if (!resolved) {
            brokenLinks.push({
              sourcePath: file.relative,
              brokenLink: normalized,
              displayText: link.displayText,
            });
            if (isInsideWiki(paths, normalized)) {
              missingPages.push({
                sourcePath: file.relative,
                target: normalized,
                suggestedKind: classifyMissingTarget(paths, normalized),
              });
            }
            continue;
          }
          if (pagePaths.has(resolved) && sourceOutlinks) {
            sourceOutlinks.add(resolved);
            inlinks.get(resolved)?.add(file.relative);
          }
        }
      }
    
      const orphans: OrphanFinding[] = [];
      for (const page of wikiPages) {
        const inCount = inlinks.get(page.relative)?.size ?? 0;
        const outCount = outlinks.get(page.relative)?.size ?? 0;
        if (inCount === 0 && outCount === 0) {
          orphans.push({ path: page.relative, reason: "No inbound or outbound wiki links" });
        }
      }
    
      // Staleness: source pages age from `ingested_at`, other pages from
      // `updated`; pages without a parseable date are skipped.
      const stale: StaleFinding[] = [];
      for (const page of wikiPages) {
        const dateField = page.classification === "source" ? "ingested_at" : "updated";
        const dateValue = coerceDateString(page.frontmatter[dateField]);
        if (!dateValue) continue;
        const age = ageInDays(dateValue);
        if (age > staleDays) {
          stale.push({ path: page.relative, kind: page.classification, ageDays: age, dateField });
        }
      }
    
      // A tag attached to exactly one page is reported as a singleton.
      const tagPages = new Map<string, string[]>();
      for (const page of wikiPages) {
        for (const tag of getFrontmatterTags(page.frontmatter)) {
          const list = tagPages.get(tag) ?? [];
          list.push(page.relative);
          tagPages.set(tag, list);
        }
      }
      const tagSingletons: TagSingletonFinding[] = [];
      for (const [tag, pagesWithTag] of tagPages.entries()) {
        if (pagesWithTag.length === 1 && pagesWithTag[0]) {
          tagSingletons.push({ tag, page: pagesWithTag[0] });
        }
      }
      tagSingletons.sort((left, right) => left.tag.localeCompare(right.tag));
    
      // index.md parity: pages the index never links to, plus index entries
      // that no longer resolve to a note.
      const indexTargets = await readIndexLinkTargets(paths);
      const missingFromIndex: string[] = [];
      for (const page of wikiPages) {
        const normalized = normalizeLinkTarget(page.relative);
        const stem = path.basename(page.relative, ".md");
        const inIndex =
          indexTargets.has(normalized) ||
          indexTargets.has(stem) ||
          indexTargets.has(page.relative) ||
        indexTargets.has(page.relative.replace(/\.md$/, ""));
        if (!inIndex) missingFromIndex.push(page.relative);
      }
      const staleEntries: string[] = [];
      for (const target of indexTargets) {
        if (!resolveIndexedLink(target, noteIndex)) staleEntries.push(target);
      }
    
      const findings: WikiLintFindings = {
        orphans,
        brokenLinks,
        stale,
        missingPages,
        tagSingletons,
        indexMismatch: { missingFromIndex, staleEntries },
      };
    
      const totals = {
        orphans: orphans.length,
        brokenLinks: brokenLinks.length,
        stale: stale.length,
        missingPages: missingPages.length,
        tagSingletons: tagSingletons.length,
        indexMissingFromIndex: missingFromIndex.length,
        indexStaleEntries: staleEntries.length,
        all:
          orphans.length +
          brokenLinks.length +
          stale.length +
          missingPages.length +
          tagSingletons.length +
          missingFromIndex.length +
          staleEntries.length,
      };
    
      const summary =
        totals.all === 0
          ? `Wiki clean (${wikiPages.length} pages scanned)`
          : `Wiki lint: ${totals.all} findings across ${wikiPages.length} pages`;
    
      return {
        changed: false,
        target: paths.rootRelative,
        summary,
        pagesScanned: wikiPages.length,
        staleDays,
        totals,
        findings,
      };
    }
  • Input schema `wikiLintArgsSchema` for the wiki.lint tool: accepts optional `staleDays` (int 1-3650), `wikiRoot` override, and `vaultPath`.
    export const wikiLintArgsSchema = z.object({
      staleDays: z.number().int().min(1).max(3650).optional(),
      wikiRoot: wikiRootOverrideSchema,
      vaultPath: z.string().optional(),
    });
    export type WikiLintArgs = z.input<typeof wikiLintArgsSchema>;
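    // Illustration (not from the source): all fields are optional, and zod
    // enforces the staleDays bounds at parse time.
    wikiLintArgsSchema.safeParse({}).success;                  // => true
    wikiLintArgsSchema.safeParse({ staleDays: 5000 }).success; // => false (max is 3650)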
  • Registration of the `wiki.lint` tool definition in the `wikiTools` array, mapping it to the `lintWiki` handler with READ_ONLY annotation.
    {
      name: "wiki.lint",
      title: "Lint Wiki",
      description:
        "Health-check the wiki: orphans, broken links, stale sources, missing concept pages, singleton tags, and index.md parity. Read-only; returns grouped findings with totals.",
      inputSchema: wikiLintArgsSchema,
      outputSchema: looseObjectSchema,
      annotations: READ_ONLY,
      handler: (context, args) => lintWiki(context, args as Parameters<typeof lintWiki>[1]),
    },
  • Helper functions `loadWikiFiles`, `readIndexLinkTargets`, `classifyMissingTarget`, `coerceDateString`, and `ageInDays` used by the `lintWiki` handler to load wiki files, check index links, classify missing targets, and calculate staleness. Only `loadWikiFiles` is reproduced here; a sketch of the date helpers follows this list.
    async function loadWikiFiles(paths: WikiPaths): Promise<WikiFile[]> {
      if (!(await fileExists(paths.rootAbsolute))) return [];
      const absolutes = await walkMarkdownFiles(paths.rootAbsolute);
      const files = await Promise.all(
        absolutes.map(async (absolute): Promise<WikiFile | null> => {
          const relative = toVaultRelativePath(paths.vaultRoot, absolute);
          if (!isInsideWiki(paths, relative)) return null;
          const classification = classifyWikiPath(paths, relative);
          if (classification === "schema") return null;
          const content = await readUtf8(absolute);
          const parsed = parseFrontmatter(content);
          return {
            absolute,
            relative,
            classification,
            content,
            frontmatter: parsed.data,
            body: parsed.content,
          };
        }),
      );
      return files.filter((file): file is WikiFile => file !== null);
    }
  • Type definitions for `WikiLintFindings` including orphans, brokenLinks, stale, missingPages, tagSingletons, and indexMismatch. The individual finding types are not reproduced in the source; inferred shapes follow this list.
    export type WikiLintFindings = {
      orphans: OrphanFinding[];
      brokenLinks: BrokenLinkFinding[];
      stale: StaleFinding[];
      missingPages: MissingPageFinding[];
      tagSingletons: TagSingletonFinding[];
      indexMismatch: IndexMismatch;
    };
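
The date helpers named above (`coerceDateString`, `ageInDays`) are not reproduced in the source; a minimal sketch, assuming frontmatter dates arrive as ISO-8601 strings or Date objects:

    function coerceDateString(value: unknown): string | null {
      if (value instanceof Date) return value.toISOString();
      if (typeof value === "string" && !Number.isNaN(Date.parse(value))) return value;
      return null;
    }

    function ageInDays(dateValue: string): number {
      // Whole days elapsed since the given timestamp.
      return Math.floor((Date.now() - new Date(dateValue).getTime()) / 86_400_000);
    }

Likewise, the individual finding types can be inferred from how `lintWiki` constructs them; these are reconstructions, not code copied from the source:

    type OrphanFinding = { path: string; reason: string };
    type BrokenLinkFinding = { sourcePath: string; brokenLink: string; displayText?: string };
    type MissingPageFinding = { sourcePath: string; target: string; suggestedKind: string };
    type StaleFinding = {
      path: string;
      kind: "source" | "concept" | "entity";
      ageDays: number;
      dateField: string;
    };
    type TagSingletonFinding = { tag: string; page: string };
    type IndexMismatch = { missingFromIndex: string[]; staleEntries: string[] };
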
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context beyond annotations by stating the tool is read-only and returns grouped findings with totals. It also specifies the vault selection behavior. No contradictions with annotations. The level of detail is sufficient for understanding side effects and limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two focused paragraphs. The first paragraph efficiently lists checks and return type. The second paragraph explains vault targeting. The structure is logical, though the parameter mapping could be more explicit.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (not shown), the description covers key functional aspects: scope, output nature, and vault handling. It omits some parameter constraints, but those are available in the input schema. Overall adequate for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Since none of the schema parameters carry descriptions, the tool description has to do that work: it partially clarifies vaultPath and implies staleDays via "stale sources", but wikiRoot goes unexplained. The meaning of staleDays (a day-count threshold) and wikiRoot (the wiki's root directory) is not fully conveyed, leaving some ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the tool's function as a health-check for the wiki, listing concrete checks like orphans, broken links, stale sources, etc. It clearly distinguishes from sibling tools (e.g., wiki.ingest, wiki.init) by focusing on auditing rather than creation or modification.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains how the tool selects the vault (session-active by default, overrideable via vaultPath), providing clear guidance on scope. However, it does not explicitly state when to use this tool versus related siblings (e.g., links.health for link-specific checks), nor does it mention when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
