search_symbols

Search for symbols using an index database, automatically falling back to graph search if the index is unavailable.

Instructions

Search symbols via index DB if available, else graph fallback.

Input Schema

Name            | Required | Description | Default
----------------|----------|-------------|--------
query           | Yes      |             |
limit           | No       |             | 8
context_unit_id | No       |             | ""
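
The schema can be exercised with arguments like the following (values are illustrative; only `query` is required, and the defaults come from the tool registration shown further down this page):

```python
# Illustrative tool-call arguments matching the input schema above.
# Only "query" is required; "limit" defaults to 8 and "context_unit_id"
# to "" in the MCP registration. The unit ID value is hypothetical.
payload = {
    "query": "parse headers",                      # required: name fragment or description
    "limit": 5,                                    # optional cap on result count
    "context_unit_id": "src/http.py::HttpClient",  # optional scope bias (invented ID)
}

required = {"query"}
assert required <= payload.keys()
```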

Implementation Reference

  • Core handler method 'search_symbols' on ArchitectureContextService. Uses index DB (QueryPlanner.lookup_symbol + search_prefix) if available, otherwise falls back to a graph-based substring scan of unit IDs. Results are cached via _cached().
    def search_symbols(self, query: str, limit: int = 10, context_unit_id: str | None = None) -> dict[str, Any]:
        """Search symbols using index DB if available, otherwise graph fallback."""
        max_limit = max(1, limit)
        key = ("search_symbols", query, max_limit, context_unit_id or "")
    
        def _build() -> dict[str, Any]:
            planner = self._get_planner()
            if planner is not None:
                results = []
                seen = set()
                for r in planner.lookup_symbol(query, context_unit_id=context_unit_id, max_results=max_limit):
                    if r.unit_id in seen:
                        continue
                    seen.add(r.unit_id)
                    results.append(
                        {
                            "unit_id": r.unit_id,
                            "name": r.name,
                            "file_path": r.file_path,
                            "score": round(r.score, 4),
                            "reasoning": r.reasoning,
                            "is_exported": bool(r.is_exported),
                        }
                    )
                if len(results) < max_limit:
                    for r in planner.search_prefix(query, context_unit_id=context_unit_id, max_results=max_limit):
                        if r.unit_id in seen:
                            continue
                        seen.add(r.unit_id)
                        results.append(
                            {
                                "unit_id": r.unit_id,
                                "name": r.name,
                                "file_path": r.file_path,
                                "score": round(r.score, 4),
                                "reasoning": r.reasoning,
                                "is_exported": bool(r.is_exported),
                            }
                        )
                        if len(results) >= max_limit:
                            break
                return {"query": query, "source": "index_db", "count": len(results), "results": results}
    
            qlow = query.lower()
            matches = []
            for uid in self.unit_by_id:
                symbol_name = uid.split("::")[-1]
                if qlow in symbol_name.lower() or qlow in uid.lower():
                    matches.append(
                        {
                            "unit_id": uid,
                            "name": symbol_name,
                            "file_path": uid.split("::", 1)[0],
                            "score": 0.5,
                            "reasoning": "fallback graph symbol scan",
                            "is_exported": False,
                        }
                    )
            matches.sort(key=lambda x: x["name"])
            return {"query": query, "source": "graph_fallback", "count": len(matches[:max_limit]), "results": matches[:max_limit]}
    
        return self._cached(key, _build)
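As a rough standalone illustration of the graph-fallback branch, the substring scan over unit IDs can be reproduced in isolation. The `unit_by_id` sample below is invented; its `"file_path::Symbol"` key shape is inferred from the `split("::")` calls in the handler.

```python
# Sketch of the graph-fallback scan from search_symbols, with a fake
# unit_by_id mapping. Keys follow the "file_path::Symbol" convention
# implied by the split("::") calls above; the sample IDs are invented.
unit_by_id = {
    "src/http.py::parse_headers": None,
    "src/http.py::HttpClient::send": None,
    "src/util.py::format_date": None,
}

def fallback_search(query: str, limit: int = 10):
    qlow = query.lower()
    matches = []
    for uid in unit_by_id:
        symbol_name = uid.split("::")[-1]
        # Match against either the bare symbol name or the full unit ID.
        if qlow in symbol_name.lower() or qlow in uid.lower():
            matches.append({
                "unit_id": uid,
                "name": symbol_name,
                "file_path": uid.split("::", 1)[0],
                "score": 0.5,
                "reasoning": "fallback graph symbol scan",
                "is_exported": False,
            })
    matches.sort(key=lambda x: x["name"])
    return matches[:max(1, limit)]

print([m["name"] for m in fallback_search("parse")])  # → ['parse_headers']
```

Note that every fallback hit carries the same fixed score of 0.5, so the sort by name is the only ordering; ranking quality depends entirely on the index DB path.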
  • MCP tool registration of 'search_symbols' via @mcp.tool() decorator. Defines the public API surface (query, limit, context_unit_id) and delegates to service.search_symbols().
    @mcp.tool()
    def search_symbols(query: str, limit: int = 8, context_unit_id: str = ""):
        """Search for symbols (functions, classes, methods) by name or description.
    
        Use this to locate a specific function or class before navigating to it or
        understanding its context. Requires an index DB for ranked semantic results;
        falls back to graph name-matching when no DB is present. Prefer this over
        grep when you want architecture-aware ranking. Do NOT use for cluster lookup —
        use cluster_of_file instead.
    
        Args:
            query: Symbol name fragment or natural-language description (e.g. "parse headers").
            limit: Maximum number of results to return (default 8).
            context_unit_id: Optional unit ID to bias results toward a specific module scope.
    
        Returns:
            A ranked list of symbol dicts with name, file, cluster, and relevance score.
        """
        ctx = context_unit_id or None
        return service.search_symbols(query=query, limit=limit, context_unit_id=ctx)
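One detail worth noting in the registration: the MCP layer uses an empty string as the schema default for `context_unit_id`, then normalizes it to `None` before delegating to the service. A minimal sketch of that normalization:

```python
def normalize_ctx(context_unit_id: str):
    # MCP schemas often avoid nullable fields, so "" stands in for
    # "no scope"; the service layer expects None instead.
    return context_unit_id or None

assert normalize_ctx("") is None
assert normalize_ctx("src/http.py::parse_headers") == "src/http.py::parse_headers"
```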
  • Helper _get_planner() lazy-loads QueryPlanner from index DB path. Used by search_symbols to perform semantic lookups.
    def _get_planner(self):
        if self.index_db_path is None or not self.index_db_path.exists():
            return None
        if self._planner is None:
            from bgi.indexer.planner import QueryPlanner
    
            self._planner = QueryPlanner(str(self.index_db_path))
        return self._planner
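The helper follows a standard lazy-singleton pattern: return `None` when the DB is absent, construct once on first use, and reuse the instance thereafter. A self-contained sketch of the same pattern, using a stand-in planner class since `QueryPlanner`'s behavior beyond its DB-path constructor is not shown here:

```python
from pathlib import Path
import tempfile

class FakePlanner:
    # Stand-in for QueryPlanner; only records the DB path it was given.
    def __init__(self, db_path: str):
        self.db_path = db_path

class Service:
    def __init__(self, index_db_path):
        self.index_db_path = Path(index_db_path) if index_db_path else None
        self._planner = None

    def _get_planner(self):
        # No path configured, or file missing: signal "no index DB" with None.
        if self.index_db_path is None or not self.index_db_path.exists():
            return None
        # Construct lazily on first call, then reuse the same instance.
        if self._planner is None:
            self._planner = FakePlanner(str(self.index_db_path))
        return self._planner

with tempfile.NamedTemporaryFile() as f:
    svc = Service(f.name)
    assert svc._get_planner() is svc._get_planner()  # cached instance
assert Service(None)._get_planner() is None          # no DB configured
```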
  • Type annotations and docstring defining the input schema for search_symbols: query (str), limit (int, default 8), context_unit_id (str, default ''). Returns a ranked list of symbol dicts.
    Args:
        query: Symbol name fragment or natural-language description (e.g. "parse headers").
        limit: Maximum number of results to return (default 8).
        context_unit_id: Optional unit ID to bias results toward a specific module scope.
    
    Returns:
        A ranked list of symbol dicts with name, file, cluster, and relevance score.
    """
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool tries an index database first and falls back to a graph if unavailable, which adds behavioral context beyond the empty annotations. However, it does not specify what happens if both fail, potential side effects, or auth requirements. For a tool with no annotations, this is moderate but incomplete.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (ten words) and front-loaded with the core action. However, it is overly minimal given that the tool has three parameters, and some additional detail would improve usability without sacrificing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has three parameters, no output schema, and no annotations, the description lacks completeness. It does not explain return values, parameter usage, error handling, or the nature of 'symbols'. A search tool typically needs more context to be usable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% and the description provides no explanation of the three parameters (query, limit, context_unit_id). It adds no meaning beyond the raw schema, leaving the agent to infer roles from names alone. This is insufficient for proper tool invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Search symbols', a specific verb-resource pair that clearly indicates the tool's function, and it is easy to distinguish from sibling tools since none of them relate to symbol search. However, it does not specify what kind of symbols are meant (e.g. code symbols versus financial symbols) and does not elaborate on the search behavior.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives. The description mentions internal fallback logic but offers no prerequisites, typical use cases, or situations where another tool would be preferred. The sibling tools are dissimilar, which limits the risk of confusion, but the description still does not position itself relative to them.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
