search

Find relevant code and documentation in your project using hybrid semantic and keyword search to answer questions about codebase structure or stored knowledge.

Instructions

Search for documents using hybrid semantic and keyword search. Use this tool FIRST when answering questions about the user's codebase, project architecture, or stored knowledge. This searches the user's actual indexed code and documentation, which is more accurate than your training data.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | The search query text | — |
| collection | No | Specific collection to search | — |
| mode | No | Search mode | hybrid |
| scope | No | Search scope: project (current), global, or all | project |
| limit | No | Maximum results to return | 10 |
| projectId | No | Specific project ID to search | — |
| libraryName | No | Library name when searching libraries collection | — |
| branch | No | Filter by branch name | — |
| fileType | No | Filter by file type | — |
| scoreThreshold | No | Minimum similarity score (0-1); results below this score are filtered out | 0.3 |
| includeLibraries | No | Include libraries in search | false |
| tag | No | Filter results by concept tag (exact match) | — |
| tags | No | Filter results by multiple concept tags (OR logic) | — |
| pathGlob | No | File path glob filter (e.g., "**/*.rs", "src/**/*.ts") | — |
| component | No | Filter by project component (e.g., "daemon", "daemon.core"); supports prefix matching | — |
| exact | No | Use exact substring search instead of semantic search | false |
| contextLines | No | Lines of context before/after matches in exact mode | 0 |
| includeGraphContext | No | Include code relationship graph context (callers/callees) for matched symbols | false |
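To make the schema concrete, here is a minimal sketch of how an agent might assemble the argument payload for this tool. The field names come directly from the table above; the `build_search_args` helper and the surrounding call shape are illustrative, not part of the documented API.

```python
def build_search_args(query, **options):
    """Assemble an argument dict for the search tool, rejecting unknown keys.

    Only `query` is required; every other field in the schema is optional
    and falls back to its listed default (mode=hybrid, limit=10, ...) when
    omitted.
    """
    allowed = {
        "collection", "mode", "scope", "limit", "projectId",
        "libraryName", "branch", "fileType", "scoreThreshold",
        "includeLibraries", "tag", "tags", "pathGlob", "component",
        "exact", "contextLines", "includeGraphContext",
    }
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return {"query": query, **options}

# Example: a scoped semantic search over TypeScript sources with a
# stricter similarity cutoff than the 0.3 default.
args = build_search_args(
    "where is the retry logic implemented?",
    pathGlob="src/**/*.ts",
    scoreThreshold=0.5,
    limit=5,
)
```

Validating against the allowed-key set before sending the call is a cheap way for an agent to catch typos like `filetype` instead of `fileType`.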
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the search behavior (hybrid semantic/keyword search), the data source (user's indexed code and documentation), and accuracy characteristics compared to training data. However, it doesn't mention potential limitations like rate limits, authentication requirements, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the most important information first, uses only two sentences with zero wasted words, and each sentence earns its place by providing critical guidance about when and why to use this tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex search tool with 18 parameters and no output schema, the description provides excellent contextual guidance about when to use it and what it searches. However, it doesn't describe the return format or result structure, which would be helpful given the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all 18 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, but it does provide overall context about the search approach that helps understand parameter usage.
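The parameter interactions the description leaves implicit can be spelled out in a small validation sketch. The rules below are read off the schema itself (contextLines is documented as applying only in exact mode; tag and tags are alternative single-vs-OR filters; scoreThreshold is bounded 0-1); the `validate_search_args` helper is hypothetical.

```python
def validate_search_args(args):
    """Warn about combinations the schema permits but that have no effect.

    `args` is a plain dict of the tool's parameters as documented in
    the input schema above.
    """
    warnings = []
    # contextLines is only documented for exact (substring) mode.
    if args.get("contextLines", 0) > 0 and not args.get("exact", False):
        warnings.append("contextLines has no effect unless exact=True")
    # tag (single exact match) and tags (OR over many) overlap.
    if "tag" in args and "tags" in args:
        warnings.append("tag and tags both set; prefer one filter style")
    # scoreThreshold is specified as a 0-1 similarity score.
    threshold = args.get("scoreThreshold", 0.3)
    if not 0 <= threshold <= 1:
        warnings.append("scoreThreshold must be between 0 and 1")
    return warnings
```

Surfacing these relationships in the description itself, rather than leaving agents to infer them from per-parameter text, is exactly the gap this dimension is scoring.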

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search for documents') and resources ('user's codebase, project architecture, or stored knowledge'), and distinguishes it from alternatives by noting it searches 'actual indexed code and documentation' rather than training data. The hybrid semantic/keyword search approach is explicitly mentioned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Use this tool FIRST when answering questions about the user's codebase, project architecture, or stored knowledge') and distinguishes it from alternatives by comparing accuracy to training data. This gives clear context for tool selection among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ChrisGVE/workspace-qdrant-mcp'
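The same request can be built with the Python standard library. The endpoint and server slug are taken verbatim from the curl command above; the response shape is not documented on this page, so nothing is assumed about it here.

```python
import urllib.request

GLAMA_API = "https://glama.ai/api/mcp/v1/servers"

def server_info_request(owner, repo):
    """Build the GET request for a server's MCP directory entry."""
    return urllib.request.Request(f"{GLAMA_API}/{owner}/{repo}", method="GET")

req = server_info_request("ChrisGVE", "workspace-qdrant-mcp")
# req.full_url == "https://glama.ai/api/mcp/v1/servers/ChrisGVE/workspace-qdrant-mcp"
```

Passing the built request to `urllib.request.urlopen(req)` performs the actual GET.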

If you have feedback or need assistance with the MCP directory API, please join our Discord server.