Glama
libra850
by libra850

analyze_backlinks

Analyzes backlinks and graph structure for a specific note in an Obsidian vault, surfacing connections and relationships between notes.

Instructions

Performs backlink analysis and graph structure analysis.

Input Schema

Name | Required | Description | Default
targetNote | Yes | Target note path (vault-relative) | —

Implementation Reference

  • Core handler function that implements the analyze_backlinks tool logic: scans all vault .md files for incoming wiki links [[target]] and markdown links to the target note, extracts context snippets, identifies related notes, and computes basic graph metrics.
    async analyzeBacklinks(targetNote: string): Promise<{
      targetNote: string;
      backlinks: Array<{
        sourceFile: string;
        context: string;
        linkType: 'wiki' | 'markdown';
      }>;
      relatedNotes: string[];
      metrics: {
        popularity: number;
        centrality: number;
      };
    }> {
      if (!FileUtils.validatePath(this.config.vaultPath, targetNote)) {
        throw new Error('Invalid target note path');
      }
    
      const targetFullPath = path.join(this.config.vaultPath, targetNote);
      if (!(await FileUtils.fileExists(targetFullPath))) {
        throw new Error(`Target note '${targetNote}' was not found`);
      }
    
      const vaultPath = this.config.vaultPath;
      const backlinks: Array<{
        sourceFile: string;
        context: string;
        linkType: 'wiki' | 'markdown';
      }> = [];
    
      // Base name of the target note (without extension)
      const targetBaseName = path.basename(targetNote, '.md');
      
      // Collect all Markdown files in the vault
      const allMdFiles = await this.getAllMdFiles(vaultPath);
    
      const processFile = async (filePath: string) => {
        try {
          const content = await fs.readFile(filePath, 'utf-8');
          const lines = content.split('\n');
          const relativeFilePath = path.relative(vaultPath, filePath);
    
          // Skip the target note itself
          if (path.resolve(filePath) === path.resolve(targetFullPath)) {
            return;
          }
    
          for (let lineIndex = 0; lineIndex < lines.length; lineIndex++) {
            const line = lines[lineIndex];
            
            // Detect wiki links [[...]]
            const wikiLinkMatches = line.matchAll(/\[\[([^\]|]+)(\|[^\]]+)?\]\]/g);
            for (const match of wikiLinkMatches) {
              const linkTarget = match[1];
              
              if (linkTarget === targetBaseName || linkTarget === targetNote || linkTarget === targetNote.replace(/\.md$/, '')) {
                const context = this.extractContext(lines, lineIndex);
                backlinks.push({
                  sourceFile: relativeFilePath,
                  context,
                  linkType: 'wiki',
                });
              }
            }
    
            // Detect Markdown links [...](...)
            const mdLinkMatches = line.matchAll(/\[([^\]]+)\]\(([^)]+)\)/g);
            for (const match of mdLinkMatches) {
              const linkTarget = match[2];
              
              if (linkTarget.includes(targetBaseName) || linkTarget.includes(targetNote)) {
                const context = this.extractContext(lines, lineIndex);
                backlinks.push({
                  sourceFile: relativeFilePath,
                  context,
                  linkType: 'markdown',
                });
              }
            }
          }
        } catch (error) {
          // Ignore file read errors
        }
      };
    
      // Process every Markdown file
      for (const filePath of allMdFiles) {
        await processFile(filePath);
      }
    
      // Compute related notes (files that contain backlinks)
      const relatedNotes = [...new Set(backlinks.map(bl => bl.sourceFile))];
    
      // Compute metrics
      const popularity = backlinks.length;
      const centrality = this.calculateCentrality(backlinks, allMdFiles.length);
    
      return {
        targetNote,
        backlinks,
        relatedNotes,
        metrics: {
          popularity,
          centrality,
        },
      };
    }
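  • The `calculateCentrality` helper called above is not shown in the reference. A minimal sketch, assuming centrality is simply the fraction of other vault notes that link to the target (the actual implementation may weight links differently):

    ```typescript
    // Hypothetical sketch of calculateCentrality; the real helper is not shown
    // in the reference. Assumes centrality = unique linking files / other notes.
    type Backlink = {
      sourceFile: string;
      context: string;
      linkType: 'wiki' | 'markdown';
    };

    function calculateCentrality(backlinks: Backlink[], totalFiles: number): number {
      if (totalFiles <= 1) return 0; // a vault with one note has no possible links
      const uniqueSources = new Set(backlinks.map((bl) => bl.sourceFile)).size;
      return uniqueSources / (totalFiles - 1); // exclude the target note itself
    }
    ```

    With three backlinks coming from two distinct files in a six-note vault, this yields 2 / 5 = 0.4.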
  • src/server.ts:200-213 (registration)
    Registration of the 'analyze_backlinks' tool in the MCP server's tool list, including name, description, and input schema.
    {
      name: 'analyze_backlinks',
      description: 'Performs backlink analysis and graph structure analysis',
      inputSchema: {
        type: 'object',
        properties: {
          targetNote: {
            type: 'string',
            description: 'Target note path (vault-relative)',
          },
        },
        required: ['targetNote'],
      },
    },
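  • Since the schema declares a single required string, client-side validation of a call's arguments is straightforward. A sketch (the path `Projects/Roadmap.md` is a made-up example, not from the source):

    ```typescript
    // Minimal runtime check mirroring the inputSchema above: arguments must be
    // an object with a string 'targetNote'. The example path is hypothetical.
    function isValidArgs(args: unknown): args is { targetNote: string } {
      return (
        typeof args === 'object' &&
        args !== null &&
        typeof (args as { targetNote?: unknown }).targetNote === 'string'
      );
    }

    console.log(isValidArgs({ targetNote: 'Projects/Roadmap.md' })); // true
    console.log(isValidArgs({})); // false
    ```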
  • MCP server request handler (switch case) that receives tool calls for 'analyze_backlinks', extracts arguments, delegates to ObsidianHandler.analyzeBacklinks, and formats response as JSON text content.
    case 'analyze_backlinks': {
      const backlinksResult = await this.obsidianHandler.analyzeBacklinks(
        args.targetNote as string
      );
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(backlinksResult, null, 2),
          },
        ],
      };
    }
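  • Because the result is flattened into a single text content part, a client has to `JSON.parse` it to recover the structured data. A round-trip sketch using the field names from the handler's return type (all sample values are invented):

    ```typescript
    // Round-trip sketch: serialize a result the way the dispatch above does,
    // then parse it back on the client side. All sample values are hypothetical.
    const backlinksResult = {
      targetNote: 'Projects/Roadmap.md',
      backlinks: [
        { sourceFile: 'Daily/2024-01-01.md', context: 'See [[Roadmap]]', linkType: 'wiki' },
      ],
      relatedNotes: ['Daily/2024-01-01.md'],
      metrics: { popularity: 1, centrality: 0.1 },
    };

    const response = {
      content: [{ type: 'text', text: JSON.stringify(backlinksResult, null, 2) }],
    };

    const parsed = JSON.parse(response.content[0].text);
    console.log(parsed.metrics.popularity); // 1
    ```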
  • Input schema definition for the analyze_backlinks tool, specifying required 'targetNote' parameter.
    inputSchema: {
      type: 'object',
      properties: {
        targetNote: {
          type: 'string',
          description: 'Target note path (vault-relative)',
        },
      },
      required: ['targetNote'],
    },
  • Helper method used by analyzeBacklinks to recursively find all Markdown files in the Obsidian vault.
    private async getAllMdFiles(dirPath: string): Promise<string[]> {
      const files: string[] = [];
      
      const processDirectory = async (currentPath: string) => {
        try {
          const entries = await fs.readdir(currentPath, { withFileTypes: true });
          
          for (const entry of entries) {
            const fullPath = path.join(currentPath, entry.name);
            
            if (entry.isDirectory()) {
              await processDirectory(fullPath);
            } else if (entry.isFile() && entry.name.endsWith('.md')) {
              files.push(fullPath);
            }
          }
        } catch (error) {
          // Ignore directory read errors
        }
      };
      
      await processDirectory(dirPath);
      return files;
    }
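  • The `extractContext` helper called from the backlink loop is likewise not shown. A plausible sketch, assuming it returns the matched line with one line of surrounding context on each side (the window radius is a guess):

    ```typescript
    // Hypothetical sketch of extractContext; the real helper is not shown in
    // the reference. Assumes a one-line window of context around the match.
    function extractContext(lines: string[], lineIndex: number, radius = 1): string {
      const start = Math.max(0, lineIndex - radius);
      const end = Math.min(lines.length, lineIndex + radius + 1);
      return lines.slice(start, end).join('\n').trim();
    }
    ```

    For `extractContext(['a', 'b', 'c', 'd'], 1)` this returns the three lines `'a\nb\nc'`.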
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions analyzing backlinks and graph structure, but doesn't disclose behavioral traits such as whether the tool is read-only, what permissions it needs, or how results are returned. That leaves significant gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence in Japanese that states the purpose without unnecessary words. It's appropriately sized and front-loaded, though it could be more structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It lacks details on what the analysis entails, how results are formatted, or any behavioral context, making it inadequate for a tool that likely returns complex graph data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'targetNote' documented as '分析対象のノートパス(vault相対パス)' (target note path relative to vault). The description doesn't add meaning beyond this, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'バックリンク分析とグラフ構造の把握を行います' (performs backlink analysis and graph structure understanding), which provides a general purpose but lacks specificity about what resources are involved or how it differs from sibling tools like 'find_broken_links' or 'link_notes'. It's not tautological but remains somewhat vague.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'find_broken_links' or 'link_notes', nor any context about prerequisites or exclusions. The description implies analysis but doesn't specify use cases or constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
