
codebase_context_search

Search across database schemas, API specs, and infrastructure configs using natural language queries. Find relevant domain knowledge and infrastructure information from curated context artifacts automatically indexed for your project.

Instructions

Semantic search across context artifacts (database schemas, API specs, infra configs, etc.) defined in .socraticodecontextartifacts.json. Auto-indexes on first use and auto-detects stale artifacts. Use this to find relevant infrastructure or domain knowledge.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| query | Yes | Natural language search query (e.g. 'tables related to billing', 'authentication endpoints', 'deployment resource limits'). | none |
| projectPath | No | Absolute path to the project directory. | none |
| artifactName | No | Filter search to a specific artifact by name (e.g. 'database-schema'). Omit to search across all artifacts. | none |
| limit | No | Maximum number of results to return. | 10 |
| minScore | No | Minimum RRF score threshold (0-1). Results below this are filtered out (override globally via SEARCH_MIN_SCORE env var). Set to 0 to disable filtering. | 0.10 |
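
Putting the schema together, a call's arguments might look like the sketch below. This is illustrative only: the parameter names and defaults come from the schema above, but the values are hypothetical, and how a given MCP client dispatches the request will vary.

```python
# Illustrative arguments for codebase_context_search; names and defaults
# come from the input schema above, values are hypothetical.
arguments = {
    "query": "tables related to billing",   # required
    "projectPath": "/home/dev/acme-app",    # hypothetical absolute path
    "artifactName": "database-schema",      # optional: restrict to one artifact
    "limit": 5,                             # default is 10
    "minScore": 0.15,                       # default is 0.10; 0 disables filtering
}
# With the official MCP Python SDK this would be dispatched roughly as:
# result = await session.call_tool("codebase_context_search", arguments)
```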
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses auto-indexing on first use and stale-artifact detection, both important behavioral traits. However, it never states that the operation is otherwise read-only, nor does it mention potential side effects, rate limits, or authentication requirements, all of which would help an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short sentences: the first names the core action and resource, the second discloses the behavioral traits (auto-indexing, stale detection), and the third gives the use case. Every sentence is necessary and contributes to understanding, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 5 parameters, no output schema, and no annotations, so the description must carry the documentation load. It covers the purpose and key behavioral traits, but it does not describe the return format (e.g., a list of matches with scores), which an agent would need in order to interpret results. Given the tool's complexity, the description is mostly complete but lacks output details.
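
To make the gap concrete, here is the kind of return shape the description could document. Every field below is hypothetical; nothing on this page specifies the tool's actual output format, which is precisely the omission noted above.

```python
# Hypothetical output shape, for illustration only; the tool's real
# return format is undocumented, which is the completeness gap flagged above.
example_results = [
    {"artifact": "database-schema", "snippet": "CREATE TABLE invoices (...)", "score": 0.42},
    {"artifact": "api-spec", "snippet": "POST /billing/charges", "score": 0.21},
]
```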

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds value by giving example queries for the 'query' parameter (e.g., 'tables related to billing') and by noting the defaults for 'limit' and 'minScore'. For the remaining parameters, however, it adds little meaning beyond what the schema already conveys.
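
As an aside, 'RRF' suggests the server fuses multiple ranked result lists (for example, lexical and vector hits) with reciprocal rank fusion. The sketch below shows generic RRF, not SocratiCode's actual implementation; in particular, the normalization step is an assumption added so that scores land in the 0-1 range the schema describes.

```python
# Generic reciprocal rank fusion (RRF); not SocratiCode's actual code.
# Raw RRF scores are small (at most ~1/(k+1) per list), so this sketch
# normalizes to 0-1 as an assumption to match the schema's stated range.
def rrf_scores(rankings: list[list[str]], k: int = 60) -> dict[str, float]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    top = max(scores.values(), default=1.0)
    return {doc: s / top for doc, s in scores.items()}

fused = rrf_scores([["invoices", "payments"], ["payments", "users"]])
kept = {doc: s for doc, s in fused.items() if s >= 0.10}  # minScore default
```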

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool performs 'semantic search across context artifacts' and names the resource types (database schemas, API specs, etc.). That distinguishes it from sibling tools such as 'codebase_search' (code search) and 'codebase_context' (likely a different action on the same artifacts). The specific verb-plus-resource pairing makes selection unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states 'Use this to find relevant infrastructure or domain knowledge,' providing a clear use case. However, it does not specify when not to use it or mention alternatives like other sibling tools, leaving room for ambiguity in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```
curl -X GET 'https://glama.ai/api/mcp/v1/servers/giancarloerra/SocratiCode'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.