Glama

search

Search a codebase using natural language, code, or exact symbols. Choose hybrid, BM25, or semantic mode, filter by language or path, and get ranked results with context.

Instructions

Search a codebase with a natural-language, code, or exact-symbol query.

Use hybrid by default, bm25 for exact identifiers and literals, and semantic for conceptual behavior. The optional filter_languages and filter_paths filters narrow the index when the agent already knows where to look. Set source to a local path or Git URL, and limit to cap the number of results. Results include formatted text for context injection as well as structured fields.

Input Schema

Name             | Required | Description                                                                                  | Default
alpha            | No       | Optional hybrid semantic weight. Omit to let SIFS choose from query shape.                   |
filter_languages | No       | Optional exact language labels to search, such as rust or typescript.                        |
filter_paths     | No       | Optional repository-relative file paths to search.                                           |
limit            | No       | Maximum number of ranked chunks to return.                                                   |
mode             | No       | Use hybrid by default, bm25 for exact symbols/literals, and semantic for conceptual queries. | hybrid
profile          | No       | Saved profile to use for source and search defaults.                                         |
query            | Yes      | Natural language or code query.                                                              |
source           | No       | Git URL or local path to index and search.                                                   |
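Putting the schema together, a call needs only query; everything else is optional. A hypothetical invocation sketch is below (field names come from the schema above, but the repository URL, paths, and query text are invented examples, not values from this server):

```json
{
  "query": "where are auth tokens refreshed",
  "mode": "semantic",
  "filter_languages": ["typescript"],
  "filter_paths": ["src/auth"],
  "limit": 10,
  "source": "https://github.com/example/repo"
}
```

For an exact-symbol lookup, the same shape would instead use "mode": "bm25" with an identifier such as refreshToken as the query.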
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral traits fully. It mentions results include formatted text and structured fields, but does not disclose side effects (e.g., read-only nature), auth requirements, rate limits, or any destructive behavior. This leaves gaps in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is five sentences, each adding distinct value. The first sentence immediately states the core purpose. Subsequent sentences cover modes, filters, source/limit, and results. No redundant or vague language, earning a top score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters, no output schema, and moderate complexity, the description adequately covers what the tool does, how to configure it, and the nature of results. It lacks explicit error handling or performance notes, but the essential context for an agent to correctly invoke the tool is present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value by grouping parameters (mode, filters, source, limit) and explaining their usage context (e.g., 'Use `hybrid` by default'). It does not repeat schema details but provides higher-level guidance, justifying a score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches a codebase with natural-language, code, or exact-symbol queries. It distinguishes itself from siblings like get_chunk or find_related by being the general search entry point, which is evident from the detailed mode explanations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises when to use each mode: hybrid by default, bm25 for exact identifiers and literals, and semantic for conceptual behavior. It also mentions optional filters for narrowing the search when the agent already knows the location. However, it lacks a direct comparison to sibling tools (e.g., find_related) explaining when to use this tool versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
