
unified_search

Search across conversations, GitHub, and markdown to retrieve integrated timelines of thinking on any topic.

Instructions

    Search across ALL sources: conversations, GitHub, markdown.
    Returns integrated timeline of thinking on a topic.
    

Input Schema

| Name  | Required | Description | Default |
|-------|----------|-------------|---------|
| query | Yes      |             |         |
| limit | No       |             | 15      |
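For reference, these arguments travel inside a standard MCP `tools/call` request. The sketch below builds such a payload in Python; the query text and limit are illustrative values chosen here, not defaults from this server.

```python
import json

# Hypothetical MCP "tools/call" request for unified_search. The argument
# names (query, limit) come from the input schema above; the surrounding
# JSON-RPC envelope follows the MCP tools/call shape.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "unified_search",
        "arguments": {"query": "vector databases", "limit": 10},
    },
}
print(json.dumps(payload, indent=2))
```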

Implementation Reference

  • The `unified_search` tool implementation: it searches conversations (vector similarity), GitHub commits (SQL keyword match), and Markdown documentation (SQL keyword match), then merges the scored results into a single timeline.
    def unified_search(query: str, limit: int = 15) -> str:
        """
        Search across ALL sources: conversations, GitHub, markdown.
        Returns integrated timeline of thinking on a topic.
        """
        cfg = get_config()
        results = []
    
        # 1. Conversation embeddings (semantic)
        try:
            embedding = get_embedding(query)
            if embedding and cfg.lance_path.exists():
                lance_results = lance_search(embedding, limit=5)
                for title, content, year, month, sim in lance_results:
                    date = f"{year}-{month:02d}"
                    results.append(("conversation", title or "Untitled", content, date, sim))
        except Exception:
            pass
    
        # 2. GitHub commits (keyword)
        try:
            gh_db = get_github_db()
            if gh_db and cfg.github_commits_parquet.exists():
                gh_results = gh_db.execute("""
                    SELECT 'github' as source,
                           repo_name || ': ' || LEFT(message, 80) as title,
                           message as content,
                           CAST(timestamp AS VARCHAR) as date,
                           0.4 as score
                    FROM github_commits
                    WHERE message ILIKE ? OR repo_name ILIKE ?
                    ORDER BY timestamp DESC
                    LIMIT 3
                """, [f"%{query}%", f"%{query}%"]).fetchall()
                results.extend(gh_results)
        except Exception:
            pass
    
        # 3. Markdown docs (keyword)
        try:
            md_db = get_markdown_db()
            if md_db:
                md_results = md_db.execute("""
                    SELECT 'markdown' as source,
                           COALESCE(title, filename) as title,
                           LEFT(content, 500) as content,
                           CAST(modified_at AS VARCHAR) as date,
                           0.45 as score
                    FROM markdown_docs
                    WHERE content ILIKE ? OR title ILIKE ? OR filename ILIKE ?
                    ORDER BY depth_score DESC
                    LIMIT 3
                """, [f"%{query}%", f"%{query}%", f"%{query}%"]).fetchall()
                results.extend(md_results)
        except Exception:
            pass
    
        if not results:
            return f"No results found across any source for: {query}"
    
        results.sort(key=lambda x: -x[4])
    
        output = [f"## Unified Search: \"{query}\"\n"]
        output.append(f"Found {len(results)} results across sources:\n")
    
        current_source = None
        for source, title, content, date, _ in results[:limit]:
            if source != current_source:
                output.append(f"\n### {source.upper()}")
                current_source = source
    
            date_str = date[:10] if date else "unknown"
            preview = (content[:150] + "...") if content and len(content) > 150 else (content or "")
            output.append(f"**[{date_str}]** {title}")
            if preview:
                output.append(f"> {preview}")
            output.append("")
    
        return "\n".join(output)
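The merge-and-format step at the end of the function can be exercised on its own: results are sorted by descending score, grouped under a heading per source, and long content is truncated to a 150-character preview. The sketch below reproduces that logic with mock rows in the same `(source, title, content, date, score)` shape; the rows themselves are invented for illustration.

```python
# Mock rows mimicking the tuples collected from the three sources.
mock_results = [
    ("github", "repo: fix search", "fix search ranking", "2024-03-01T12:00:00", 0.4),
    ("conversation", "Embeddings chat", "x" * 200, "2024-02", 0.91),
    ("markdown", "notes.md", "short note", "2024-01-15", 0.45),
]
mock_results.sort(key=lambda x: -x[4])  # highest score first

lines = []
current_source = None
for source, title, content, date, _ in mock_results[:15]:
    if source != current_source:
        lines.append(f"### {source.upper()}")  # new heading when the source changes
        current_source = source
    date_str = date[:10] if date else "unknown"
    preview = (content[:150] + "...") if content and len(content) > 150 else (content or "")
    lines.append(f"**[{date_str}]** {title}")
    if preview:
        lines.append(f"> {preview}")
print("\n".join(lines))
```

Note that the fixed scores (0.4 for GitHub, 0.45 for Markdown) rank keyword hits below any reasonably strong semantic match from the conversation index, which is why conversations tend to lead the merged timeline.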

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mordechaipotash/brain-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.