get_link_graph

Generate a visual link graph showing connections between notes in your Obsidian vault to analyze relationships and discover content connections.

Instructions

Get the link graph for the vault

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| max_notes | No | Maximum number of notes to include in the graph | 500 |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
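
Based on the implementation reference below, the `result` payload carries a graph dict of the following shape (field names mirror the code; the paths, sizes, and tags here are illustrative):

```python
# Illustrative graph payload; note paths and sizes are made up.
graph = {
    "nodes": [
        {"id": "Projects/Ideas.md", "name": "Ideas", "size": 1024, "tags": ["project"]},
        {"id": "Inbox.md", "name": "Inbox", "size": 256, "tags": []},
    ],
    "edges": [
        {"source": "Projects/Ideas.md", "target": "Inbox.md"},
    ],
    "total_nodes": 2,
    "total_edges": 1,
}
assert graph["total_nodes"] == len(graph["nodes"])
assert graph["total_edges"] == len(graph["edges"])
```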

Implementation Reference

  • Core implementation of get_link_graph: iterates over notes, extracts wikilinks, resolves them to paths, builds nodes and directed edges for graph visualization.
    async def get_link_graph(self, max_notes: int = 1000) -> dict[str, Any]:
        """
        Build a link graph for the vault.
    
        Returns:
            Dict with 'nodes' and 'edges' for visualization
        """
        nodes = []
        edges = []
        seen_paths = set()
    
        for note_meta in self.list_notes(limit=max_notes):
            # Add node
            if note_meta.path not in seen_paths:
                nodes.append(
                    {
                        "id": note_meta.path,
                        "name": note_meta.name,
                        "size": note_meta.size,
                        "tags": note_meta.tags if note_meta.tags else [],
                    }
                )
                seen_paths.add(note_meta.path)
    
            # Add edges (links)
            try:
                note = await self.read_note(note_meta.path)
                links = self._extract_links(note.content)
    
                for link in links:
                    resolved = self._resolve_link(link, note_meta.path)
                    if resolved and resolved in seen_paths:
                        edges.append(
                            {
                                "source": note_meta.path,
                                "target": resolved,
                            }
                        )
            except Exception as e:
                logger.debug(f"Error building graph for {note_meta.path}: {e}")
                continue
    
        return {
            "nodes": nodes,
            "edges": edges,
            "total_nodes": len(nodes),
            "total_edges": len(edges),
        }
  • MCP tool registration for get_link_graph: a wrapper that calls vault.get_link_graph, then formats a human-readable summary plus the full JSON output.
    @mcp.tool(name="get_link_graph", description="Get the link graph for the vault")
    async def get_link_graph(max_notes: int = 500) -> str:
        """
        Build a link graph of the vault.
    
        Args:
            max_notes: Maximum number of notes to include (default: 500)
    
        Returns:
            JSON representation of the graph with nodes and edges
        """
        if max_notes <= 0 or max_notes > 10000:
            return "Error: max_notes must be between 1 and 10000"
    
        context = _get_context()
    
        try:
            graph = await context.vault.get_link_graph(max_notes)
    
            output = "# Link Graph\n\n"
            output += f"**Total Nodes:** {graph['total_nodes']}\n"
            output += f"**Total Edges:** {graph['total_edges']}\n\n"
    
            output += "## Sample Nodes (first 10):\n"
            for node in graph["nodes"][:10]:
                output += f"- {node['name']} ({node['id']})\n"
    
            output += "\n## Sample Edges (first 10):\n"
            for edge in graph["edges"][:10]:
                output += f"- {edge['source']} → {edge['target']}\n"
    
            output += "\n\n**Full Graph Data (JSON):**\n```json\n"
            output += json.dumps(graph, indent=2)
            output += "\n```"
    
            return output
    
        except Exception as e:
            logger.exception("Error building link graph")
            return f"Error building link graph: {e}"
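The JSON payload emitted at the end of the tool output can be fed into ordinary graph tooling. A small illustrative example (sample data made up) ranking notes by incoming links:

```python
from collections import Counter

def top_linked_notes(graph: dict, n: int = 5) -> list[tuple[str, int]]:
    """Rank notes by in-degree, i.e. how many other notes link to them."""
    indegree = Counter(edge["target"] for edge in graph["edges"])
    return indegree.most_common(n)

graph = {
    "nodes": [{"id": "A.md"}, {"id": "B.md"}, {"id": "C.md"}],
    "edges": [
        {"source": "A.md", "target": "B.md"},
        {"source": "C.md", "target": "B.md"},
        {"source": "B.md", "target": "A.md"},
    ],
}
print(top_linked_notes(graph))  # [('B.md', 2), ('A.md', 1)]
```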
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action ('Get') but lacks details on what the link graph includes (e.g., nodes, edges, format), whether it's read-only or has side effects, performance considerations, or error handling. This leaves significant gaps for a tool that likely returns complex data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words. It's front-loaded with the core purpose, making it easy to parse quickly. Every part of the sentence contributes directly to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's likely complexity (returning a link graph) and the presence of an output schema, the description is minimally adequate. It states what the tool does but lacks context on the graph's structure or use cases. The output schema should cover return values, but more behavioral context would help the agent use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description doesn't mention any parameters, which is acceptable since there's only one optional parameter ('max_notes') with a default value. With 0% schema description coverage, the description doesn't need to compensate, as the parameter is straightforward and optional. The baseline for 0 parameters is 4, reflecting that no parameter explanation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'Get[s] the link graph for the vault', which provides a clear verb ('Get') and resource ('link graph for the vault'). However, it doesn't differentiate from sibling tools like 'get_backlinks' or 'get_outgoing_links', leaving ambiguity about what distinguishes this graph from other link-related operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, context for usage, or comparisons to sibling tools like 'get_backlinks' or 'get_outgoing_links', leaving the agent to infer usage scenarios independently.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
