Glama

search

Search Claude Code session history using natural language queries to find specific events, discussions, or file interactions with optional filtering by session, project, or event type.

Instructions

Semantic search across all stored Claude Code session events. Returns events matching a natural language query, with optional filters by event type, session, project, tool, or file path. IMPORTANT: Always pass session_id when you know which session to search — unscoped search returns noisy results. For finding a discussion WITH surrounding conversation context, use search_in_context instead.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| query | Yes | Natural language query | |
| limit | No | Max results | 10 |
| session_id | No | Scope search to a single session (prefix match) | |
| project_id | No | Scope search to a project by project_id | |
| project_name | No | Scope search to a project by name substring (e.g. 'gonzo') | |
| event_type | No | Filter: user_message, assistant_text, assistant_thinking, tool_call, tool_result | |
| tool_name | No | Filter by tool name (Edit, Bash, Read, etc.) | |
| file_path_contains | No | Filter to events with an explicit file_path containing this string (tool_call/tool_result events only; user messages won't have file_path metadata) | |
| max_chars | No | Max total output characters; set higher if you need full content | 12000 |
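To make the schema concrete, here is a minimal sketch of assembling an arguments payload for this tool. The `build_search_arguments` helper and the example values are assumptions for illustration; only the parameter names and defaults come from the schema above.

```python
def build_search_arguments(query, session_id=None, event_type=None,
                           tool_name=None, limit=10, max_chars=12000):
    """Assemble an arguments dict for the search tool, omitting unset filters."""
    args = {"query": query, "limit": limit, "max_chars": max_chars}
    if session_id is not None:
        args["session_id"] = session_id  # prefix match, per the schema
    if event_type is not None:
        args["event_type"] = event_type  # e.g. user_message, tool_call, ...
    if tool_name is not None:
        args["tool_name"] = tool_name    # e.g. Edit, Bash, Read
    return args

# A scoped search, as the description recommends (hypothetical values):
args = build_search_arguments(
    "where did we discuss the retry logic?",
    session_id="abc123",
    event_type="assistant_text",
)
```

Scoping with `session_id` here follows the description's guidance that unscoped searches return noisy results.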
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing important behavioral traits: it warns about 'noisy results' from unscoped searches, explains that file_path filtering only works for tool_call/tool_result events (not user messages), and mentions output truncation via max_chars parameter. It doesn't cover rate limits, authentication needs, or pagination behavior, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: the first states purpose and scope, the second provides critical usage guidance, and the third names the alternative tool. Every sentence earns its place by adding essential information not found elsewhere, and the structure is front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 9 parameters, full schema coverage, and no annotations or output schema, the description provides strong contextual completeness. It covers purpose, usage guidelines, behavioral warnings, and sibling differentiation. The main gap is the lack of information about return format or result structure, which would be helpful given the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description adds some semantic context, flagging session_id as important for scoping and noting the file_path filtering limitation, but contributes little parameter meaning beyond what the schema provides. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'semantic search across all stored Claude Code session events' and specifies it 'returns events matching a natural language query' with optional filters. It explicitly distinguishes from sibling 'search_in_context' by noting this tool returns events while the alternative finds discussions with surrounding conversation context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: 'Always pass session_id when you know which session to search — unscoped search returns noisy results' and 'For finding a discussion WITH surrounding conversation context, use search_in_context instead.' It gives clear when-to-use and when-not-to-use instructions with named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
