Glama

query_tool

Get answers from indexed documents using natural language queries. Search PDFs, YouTube videos, GitHub repos, and Discord exports with RAG-powered responses that include source citations.

Instructions

Query indexed documents and return an answer with citations.

Searches through all indexed documents (PDF, Discord, etc.) and uses RAG
to provide an answer based on retrieved context, with source citations.

Args:
    query: Natural language question to ask.
    document_id: Optional document ID to filter retrieval (from list_documents).
    page_min: Optional start of page range (inclusive). PDF only.
    page_max: Optional end of page range (inclusive). PDF only.
    tag: Optional tag to filter retrieval (from list_documents).
    document_type: Optional type to filter: "pdf", "youtube", "discord", "github", or "plaintext".
    file_path: Optional file path within a document (GitHub: e.g. src/ria/api/atr.c). Use list_documents to see files.
    response_style: Answer style: "thorough" (detailed) or "concise" (default: "thorough").
    ctx: MCP request context (injected by the server; unused).

Returns:
    Dictionary containing answer and sources (document_id, page).
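For illustration, a returned dictionary might look like the following sketch. Only the `answer` and `sources` (with `document_id` and `page`) fields are implied by the description above; all values here are hypothetical:

```python
# Hypothetical shape of a query_tool result; the answer text and IDs
# are placeholders, not real output from the server.
result = {
    "answer": "The ATR initialization sequence configures the reader before the first command.",
    "sources": [
        {"document_id": "doc_123", "page": 14},
        {"document_id": "doc_123", "page": 15},
    ],
}

# Each source entry pairs a document with the page the context came from.
for src in result["sources"]:
    print(src["document_id"], src["page"])
```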

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Natural language question to ask. | |
| document_id | No | Optional document ID to filter retrieval (from list_documents). | |
| page_min | No | Optional start of page range (inclusive). PDF only. | |
| page_max | No | Optional end of page range (inclusive). PDF only. | |
| tag | No | Optional tag to filter retrieval (from list_documents). | |
| document_type | No | Optional type to filter: 'pdf', 'youtube', 'discord', 'github', or 'plaintext'. | |
| file_path | No | Optional file path within a document (GitHub: e.g. src/ria/api/atr.c). Use list_documents to see files. | |
| response_style | No | Answer style: 'thorough' (detailed) or 'concise'. | thorough |
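A call that scopes retrieval to a page range within a single PDF might pass arguments like these (the `document_id` value is a placeholder; real IDs come from `list_documents`):

```python
# Hypothetical arguments payload for a page-scoped PDF query.
# "doc_123" is an illustrative document ID, not a real one.
arguments = {
    "query": "What does the ATR initialization sequence do?",
    "document_id": "doc_123",
    "document_type": "pdf",
    "page_min": 10,
    "page_max": 20,
    "response_style": "concise",
}
```

Omitted optional fields (`tag`, `file_path`) simply fall back to their defaults, which leaves those filters disabled.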

Implementation Reference

  • The implementation of `query_tool` validates the requested response style, then runs the synchronous `query_index` call in a worker thread via `anyio.to_thread.run_sync` so it does not block the event loop:

```python
async def query_tool(
    query: Annotated[str, Field(description="Natural language question to ask.")],
    document_id: Annotated[
        str,
        Field(
            description="Optional document ID to filter retrieval (from list_documents)."
        ),
    ] = "",
    page_min: Annotated[
        int | None,
        Field(description="Optional start of page range (inclusive). PDF only."),
    ] = None,
    page_max: Annotated[
        int | None,
        Field(description="Optional end of page range (inclusive). PDF only."),
    ] = None,
    tag: Annotated[
        str,
        Field(description="Optional tag to filter retrieval (from list_documents)."),
    ] = "",
    document_type: Annotated[
        str,
        Field(
            description="Optional type to filter: 'pdf', 'youtube', 'discord', 'github', or 'plaintext'."
        ),
    ] = "",
    file_path: Annotated[
        str,
        Field(
            description="Optional file path within a document (GitHub: e.g. src/ria/api/atr.c). Use list_documents to see files."
        ),
    ] = "",
    response_style: Annotated[
        str, Field(description="Answer style: 'thorough' (detailed) or 'concise'.")
    ] = "thorough",
    ctx: Context | None = None,
) -> dict:
    """Query indexed documents and return an answer with citations.

    Searches through all indexed documents (PDF, Discord, etc.) and uses RAG
    to provide an answer based on retrieved context, with source citations.

    Args:
        query: Natural language question to ask.
        document_id: Optional document ID to filter retrieval (from list_documents).
        page_min: Optional start of page range (inclusive). PDF only.
        page_max: Optional end of page range (inclusive). PDF only.
        tag: Optional tag to filter retrieval (from list_documents).
        document_type: Optional type to filter: "pdf", "youtube", "discord", "github", or "plaintext".
        file_path: Optional file path within a document (GitHub: e.g. src/ria/api/atr.c). Use list_documents to see files.
        response_style: Answer style: "thorough" (detailed) or "concise" (default: "thorough").
        ctx: MCP request context (injected by the server; unused).

    Returns:
        Dictionary containing answer and sources (document_id, page).

    """
    # Accept only the two known styles; otherwise fall back to the configured default.
    style_input = (response_style or "").strip().lower()
    if style_input in ("thorough", "concise"):
        style = style_input
    else:
        style = config.get_response_style()

    def _run() -> dict:
        # Empty-string filters are mapped to None so query_index treats them as unset.
        return query_index(
            user_query=query,
            document_id=document_id or None,
            page_min=page_min,
            page_max=page_max,
            tag=tag or None,
            document_type=document_type or None,
            file_path=file_path or None,
            response_style=style,
        )

    # query_index is synchronous; run it off the event loop.
    return await anyio.to_thread.run_sync(_run)
```
