
rag_query

Query stored documents with relevant context using a Retrieval-Augmented Generation (RAG) system. An index is created automatically if one does not exist, enabling quick access to information from stored repositories and text files.

Instructions

Query a document using RAG. Note: If the index does not exist, it will be created when you query, which may take some time.
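
For illustration, a minimal sketch of calling this tool from a TypeScript MCP client using the official @modelcontextprotocol/sdk. The launch command, package name, and document ID below are placeholders, not values taken from this server's documentation:

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Launch and connect to the docs-rag server over stdio (command and args are placeholders).
    const transport = new StdioClientTransport({
      command: "npx",
      args: ["-y", "mcp-docs-rag"],
    });
    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // Call rag_query. The first call for a document may be slow while its index is built.
    const result = await client.callTool({
      name: "rag_query",
      arguments: {
        document_id: "my-repo",                  // placeholder document ID
        query: "How is authentication handled?",
      },
    });
    console.log(result.content);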

Input Schema

Name          Required  Description                          Default
document_id   Yes       ID of the document to query          —
query         Yes       Query to run against the document    —

Input Schema (JSON Schema)

{ "properties": { "document_id": { "description": "ID of the document to query", "type": "string" }, "query": { "description": "Query to run against the document", "type": "string" } }, "required": [ "document_id", "query" ], "type": "object" }

Implementation Reference

  • Main handler for 'rag_query' tool call. Validates inputs, checks document existence, loads/indexes document, sets up Gemini LLM and query engine using LlamaIndex, executes query, handles errors.
    case "rag_query": { const documentId = String(request.params.arguments?.document_id); const query = String(request.params.arguments?.query); if (!documentId || !query) { throw new Error("Document ID and query are required"); } try { // ドキュメントが存在するか確認し、存在しなければ自動的に作成を試みる let documents = await listDocuments(); let document = documents.find(c => c.id === documentId); if (!document) { return { content: [{ type: "text", text: `Document '${documentId}' not found. Please add it manually using add_git_repository or add_text_file tools.` }] }; } // Load and index document if needed const index = await loadDocument(documentId); // 一時的にGemini LLMを設定 const originalLLM = Settings.llm; const gemini = new Gemini({ model: GEMINI_MODEL.GEMINI_2_0_FLASH }); // グローバル設定に設定 Settings.llm = gemini; // クエリエンジンの作成 const queryEngine = index.asQueryEngine(); // クエリの実行 const response = await queryEngine.query({ query }); return { content: [{ type: "text", text: response.toString() }] }; } catch (error: any) { console.error(`Error in rag_query:`, error.message); return { content: [{ type: "text", text: `Error processing query: ${error.message}` }] }; } }
  • src/index.ts:371-388 (registration)
    Registration of 'rag_query' tool in ListToolsRequestSchema handler, including name, description, and input schema definition.
    { name: "rag_query", description: "Query a document using RAG. Note: If the index does not exist, it will be created when you query, which may take some time.", inputSchema: { type: "object", properties: { document_id: { type: "string", description: "ID of the document to query" }, query: { type: "string", description: "Query to run against the document" } }, required: ["document_id", "query"] } },
  • Input schema for rag_query tool defining required document_id (string) and query (string) parameters.
    inputSchema: {
      type: "object",
      properties: {
        document_id: { type: "string", description: "ID of the document to query" },
        query: { type: "string", description: "Query to run against the document" }
      },
      required: ["document_id", "query"]
    }
  • Key helper function called by rag_query handler. Loads document files recursively, creates and persists VectorStoreIndex using LlamaIndex with Gemini embedding/LLM, caches in global indices.
    export async function loadDocument(documentId: string): Promise<VectorStoreIndex> {
      if (indices[documentId]?.index) {
        return indices[documentId].index;
      }

      let documents = await listDocuments();
      let document = documents.find(c => c.id === documentId);

      // Throw an error if the document does not exist
      if (!document) {
        throw new Error(`Document not found: ${documentId}`);
      }

      let documentItems: Document[] = [];

      if (fs.statSync(document.path).isDirectory()) {
        // Process the directory recursively
        documentItems = await readDirectoryRecursively(document.path);

        // Add a fallback message if the document list is empty
        if (documentItems.length === 0) {
          console.warn(`No documents found in document: ${documentId}`);
          documentItems.push(new Document({
            text: `This document (${document.name}) appears to be empty. Please check if files exist at path: ${document.path}`,
            metadata: { name: 'empty-notice', source: document.path }
          }));
        }
      } else {
        // Process single file
        const text = fs.readFileSync(document.path, 'utf-8');
        documentItems = [new Document({
          text,
          metadata: { name: document.id, source: document.path }
        })];
      }

      // Configure the Gemini embedding model
      const geminiEmbed = new GeminiEmbedding();

      // Configure the Gemini LLM
      const gemini = new Gemini({ model: GEMINI_MODEL.GEMINI_2_0_FLASH });

      // Set the embedding model and LLM on the global settings (kept globally)
      Settings.embedModel = geminiEmbed;
      Settings.llm = gemini;

      // Create storage context
      const storageContext = await storageContextFromDefaults({
        persistDir: path.join(DOCS_PATH, '.indices', documentId),
      });

      // Create index
      const index = await VectorStoreIndex.fromDocuments(documentItems, {
        storageContext,
      });

      // Save index for future use
      indices[documentId] = { index, description: document.description };

      return index;
    }
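    The readDirectoryRecursively helper called above is not shown on this page. A minimal sketch of what such a helper might look like, assuming it walks the directory tree and wraps each readable file in a LlamaIndex Document; the real implementation's file filtering may differ:

      import fs from "fs";
      import path from "path";
      import { Document } from "llamaindex";

      async function readDirectoryRecursively(dirPath: string): Promise<Document[]> {
        const documents: Document[] = [];
        for (const entry of fs.readdirSync(dirPath, { withFileTypes: true })) {
          const fullPath = path.join(dirPath, entry.name);
          if (entry.isDirectory()) {
            // Skip dot-directories such as .git, recurse into everything else.
            if (entry.name.startsWith(".")) continue;
            documents.push(...(await readDirectoryRecursively(fullPath)));
          } else if (entry.isFile()) {
            // Treat every regular file as UTF-8 text (an assumption for this sketch).
            const text = fs.readFileSync(fullPath, "utf-8");
            documents.push(new Document({ text, metadata: { name: entry.name, source: fullPath } }));
          }
        }
        return documents;
      }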
