Glama

query_graph

Search a persistent knowledge graph of prior conversations, decisions, and preferences. Accepts natural-language queries and returns a subgraph of matching nodes and their connections. Supports temporal references like 'recently' and 'last week'. Tune retrieval depth and mode for precise context.

Instructions

Automatically search the memory graph before answering questions that may depend on prior context, user preferences, project decisions, constraints, or earlier conversation state. Returns a serialized subgraph with matching nodes and their connected neighborhood. Understands temporal references such as 'recently', 'latest', 'originally', and 'last week'.
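The instructions above can be exercised with an ordinary tools/call request. A minimal sketch, assuming an MCP-style JSON-RPC envelope; the request id and the query text are illustrative, not part of the tool's contract:

```python
import json

# Hypothetical MCP "tools/call" request for query_graph. The envelope
# follows JSON-RPC 2.0 as used by MCP; temporal phrases like "last week"
# are resolved by the server, per the tool description.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_graph",
        "arguments": {
            "query": "decisions we made about the database schema last week",
        },
    },
}

# Serialize for transport.
payload = json.dumps(request)
```

The only required argument is `query`; everything else in the schema below is optional.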

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| query | Yes | Natural-language search query. | |
| max_nodes | No | Maximum number of matching nodes to return. | |
| max_depth | No | Relationship traversal depth around matching nodes. | |
| expand_depth | No | Optional support expansion depth. At 1, graph mode may return up to twice max_nodes. | |
| agent_id | No | Optional agent or client identifier used to partition memory. | |
| project | No | Optional project or workspace name used to partition memory. | |
| session_id | No | Optional conversation or run identifier used to partition memory. | |
| retrieval_mode | No | Retrieval strategy: graph-only, transcript replay, or fused graph plus replay results. | graph |
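The parameters above compose into a single arguments object. A hedged Python sketch using the names from the table; the helper function, the concrete values, and the project name are illustrative assumptions, not documented limits:

```python
def build_arguments(query, max_nodes=None, max_depth=None, expand_depth=None,
                    agent_id=None, project=None, session_id=None,
                    retrieval_mode="graph"):
    # Include only the optional parameters that were actually supplied;
    # retrieval_mode defaults to "graph" per the schema table.
    args = {"query": query, "retrieval_mode": retrieval_mode}
    optional = {
        "max_nodes": max_nodes,
        "max_depth": max_depth,
        "expand_depth": expand_depth,
        "agent_id": agent_id,
        "project": project,
        "session_id": session_id,
    }
    args.update({k: v for k, v in optional.items() if v is not None})
    return args

arguments = build_arguments(
    "preferences the user stated recently about code style",
    max_nodes=20,          # illustrative value
    expand_depth=1,        # see note on expand_depth in the table
    project="waggle",      # hypothetical workspace name
)

# Upper bound on returned nodes for graph mode with expand_depth == 1,
# per the expand_depth description above: up to twice max_nodes.
node_cap = 2 * arguments["max_nodes"]
```

Unsupplied partition keys (agent_id, session_id) are simply omitted rather than sent as null.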
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears the full burden of disclosure. It conveys not only the return type (a serialized subgraph) but also the tool's temporal understanding. However, it omits non-match behavior, side effects, and performance implications, and while it implies a read operation, it never explicitly states that the tool is read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at two sentences: the first immediately conveys the purpose and use case, and the second adds key behavioral details. No extraneous words or information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given eight parameters and no output schema, the description is minimal. It covers the core purpose and some behavior but lacks detail on return structure, error handling, and performance characteristics. It suffices for a straightforward search tool but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds context about temporal reference understanding, which enriches the query parameter but adds little meaning beyond the schema's own parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to search the memory graph for context relevant to answering questions. It specifies that it returns a serialized subgraph with matching nodes and connected neighborhood, which is a specific verb+resource combination. This distinguishes it from sibling tools like store_node or get_related by its retrieval behavior.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool (before answering questions that depend on prior context) and lists examples of what to search for. However, it does not explicitly state when not to use the tool, nor does it compare it to alternatives such as debug_retrieval or get_related, which exist as sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
