
Server Configuration

Describes the environment variables used to configure the server. All are optional and fall back to the defaults listed below.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| DB_PATH | No | Vector database storage location. Can grow large with many documents. | ./lancedb/ |
| BASE_DIR | No | Document root directory. The server only accesses files within this path (prevents accidental system file access). | . |
| CACHE_DIR | No | Model cache directory. After the first download, the model stays here for offline use. | ./models/ |
| CHUNK_SIZE | No | Characters per chunk. Larger = more context but slower processing. Valid range: 128 - 2048. | 512 |
| MODEL_NAME | No | HuggingFace model identifier. Must be Transformers.js compatible. | Xenova/all-MiniLM-L6-v2 |
| CHUNK_OVERLAP | No | Overlap between chunks. Preserves context across boundaries. Valid range: 0 - (CHUNK_SIZE/2). | 100 |
| MAX_FILE_SIZE | No | Maximum file size in bytes. Larger files are rejected to prevent memory issues. Valid range: 1MB - 500MB. | 104857600 |
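For reference, these variables are typically passed through the MCP client configuration. The snippet below is a minimal sketch in the common `mcpServers` format used by clients such as Claude Desktop; the launch command and package name (`npx -y mcp-local-rag`) are assumptions not confirmed by this page, so adjust them to match how you actually install and start the server.

```json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/home/user/docs",
        "DB_PATH": "./lancedb/",
        "CHUNK_SIZE": "512",
        "CHUNK_OVERLAP": "100",
        "MAX_FILE_SIZE": "104857600"
      }
    }
  }
}
```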

Tools

Functions exposed to the LLM to take actions

query_documents

Search ingested documents. Query words are matched exactly (keyword search) and query meaning is matched semantically (vector search). Preserve specific terms from the user, and add context if the query is ambiguous. Results include a score (0 = most relevant; higher = less relevant).
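As a sketch of how a client could invoke this tool programmatically over stdio using the official MCP TypeScript SDK: the launch command and the `query` argument name are assumptions not confirmed by this page, so check the server's actual tool schema (for example via `client.listTools()`) before relying on them.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch command is an assumption; point it at however you actually run this server.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "mcp-local-rag"],
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // "query" is an assumed argument name; inspect the tool schema for the real one.
  const result = await client.callTool({
    name: "query_documents",
    arguments: { query: "How is CHUNK_OVERLAP applied?" },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```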

ingest_file

Ingest a document file (PDF, DOCX, TXT, MD) into the vector database for semantic search. File path must be an absolute path. Supports re-ingestion to update existing documents.
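A minimal sketch of a file-ingestion call, assuming a connected `client` as in the example above; the `filePath` argument name is inferred from the delete_file description and may differ in the actual schema.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Ingest (or re-ingest) a document by absolute path; "filePath" is an assumed argument name.
async function ingestFile(client: Client, absolutePath: string) {
  return client.callTool({
    name: "ingest_file",
    arguments: { filePath: absolutePath },
  });
}

// Example: await ingestFile(client, "/home/user/docs/handbook.pdf");
```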

ingest_data

Ingest content as a string, not from a file. Use for: fetched web pages (format: html), copied text (format: text), or markdown strings (format: markdown). The source identifier enables re-ingestion to update existing content. For files on disk, use ingest_file instead.
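A sketch of ingesting fetched web content, again assuming a connected `client`; the `format` and `source` names come from the description above, while the name of the content argument (here `data`) is an assumption.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch a page and ingest its raw HTML; re-running with the same source updates the entry.
async function ingestPage(client: Client, url: string) {
  const html = await (await fetch(url)).text();
  return client.callTool({
    name: "ingest_data",
    arguments: { data: html, format: "html", source: url }, // "data" is an assumed name
  });
}
```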

delete_file

Delete a previously ingested file or data from the vector database. Use filePath for files ingested via ingest_file, or source for data ingested via ingest_data. Either filePath or source must be provided.
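For completeness, a sketch of the two deletion variants described above, assuming a connected `client`:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Remove a document ingested via ingest_file.
async function deleteByPath(client: Client, absolutePath: string) {
  return client.callTool({ name: "delete_file", arguments: { filePath: absolutePath } });
}

// Remove content ingested via ingest_data.
async function deleteBySource(client: Client, source: string) {
  return client.callTool({ name: "delete_file", arguments: { source } });
}
```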

list_files

List all ingested files in the vector database. Returns file paths and chunk counts for each document.

status

Get system status including total documents, total chunks, database size, and configuration information.
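Both inspection tools appear to take no arguments, so calling them is straightforward; a sketch, assuming a connected `client`:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// List ingested documents and report overall system status.
async function inspect(client: Client) {
  const files = await client.callTool({ name: "list_files", arguments: {} });
  const status = await client.callTool({ name: "status", arguments: {} });
  console.log(files.content, status.content);
}
```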

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/shinpr/mcp-local-rag'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.