MCP Memory LibSQL Go

by ZanzyTHEbar

Server Configuration

Describes the environment variables used to configure the server. All are optional; defaults are listed where they apply.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `BM25_B` | No | BM25 length normalization parameter (e.g. `0.75`) | |
| `BM25_K1` | No | BM25 saturation parameter (e.g. `1.2`) | |
| `LIBSQL_URL` | No | Database URL (local file: `file:./path/to/db.sqlite`; remote libSQL: `libsql://your-db.turso.io`) | `file:./libsql.db` |
| `BM25_ENABLE` | No | Set to `false` or `0` to disable BM25 ordering | `true` |
| `OLLAMA_HOST` | No | Ollama host (e.g. `http://localhost:11434`) | |
| `HYBRID_RRF_K` | No | RRF `K` parameter for hybrid search | `60` |
| `METRICS_PORT` | No | Metrics HTTP port exposing `/metrics` and `/healthz` | `9090` |
| `PROJECTS_DIR` | No | Base directory for multi-project mode (can also be set via the `-projects-dir` flag) | |
| `HYBRID_SEARCH` | No | Set to `true`/`1` to enable hybrid search | |
| `EMBEDDING_DIMS` | No | Embedding dimension for new databases; existing DBs are auto-detected and take precedence at runtime | `4` |
| `GOOGLE_API_KEY` | No | Google API key for Gemini | |
| `OPENAI_API_KEY` | No | OpenAI API key | |
| `VOYAGE_API_KEY` | No | VoyageAI API key (alternative) | |
| `LOCALAI_API_KEY` | No | Optional LocalAI API key | |
| `LOCALAI_BASE_URL` | No | LocalAI base URL (OpenAI-compatible) | `http://localhost:8080/v1` |
| `VOYAGEAI_API_KEY` | No | VoyageAI API key | |
| `DB_MAX_IDLE_CONNS` | No | Max idle DB connections (optional) | |
| `DB_MAX_OPEN_CONNS` | No | Max open DB connections (optional) | |
| `LIBSQL_AUTH_TOKEN` | No | Authentication token for remote databases | |
| `HYBRID_TEXT_WEIGHT` | No | Text weight for hybrid search | `0.4` |
| `METRICS_PROMETHEUS` | No | If set (e.g. `true`), expose Prometheus metrics | |
| `EMBEDDINGS_PROVIDER` | No | Optional embeddings source. Supported values: `openai`; `ollama`; `gemini`, `google`, `google-gemini`, or `google_genai`; `vertexai`, `vertex`, or `google-vertex`; `localai`, `llamacpp`, or `llama.cpp`; `voyageai`, `voyage`, or `voyage-ai` | |
| `OLLAMA_HTTP_TIMEOUT` | No | Ollama HTTP timeout, to allow cold model loads for larger models (e.g. `60s`) | |
| `VERTEX_ACCESS_TOKEN` | No | Vertex AI Bearer token | |
| `DB_CONN_MAX_IDLE_SEC` | No | Connection max idle time in seconds (optional) | |
| `HYBRID_VECTOR_WEIGHT` | No | Vector weight for hybrid search | `0.6` |
| `EMBEDDINGS_ADAPT_MODE` | No | How to adapt provider vectors to the DB size: `pad_or_truncate`, `pad`, or `truncate` | `pad_or_truncate` |
| `GEMINI_EMBEDDINGS_MODEL` | No | Gemini embeddings model (768 dims) | `text-embedding-004` |
| `OLLAMA_EMBEDDINGS_MODEL` | No | Ollama embeddings model (768 dims) | `nomic-embed-text` |
| `OPENAI_EMBEDDINGS_MODEL` | No | OpenAI embeddings model | `text-embedding-3-small` |
| `DB_CONN_MAX_LIFETIME_SEC` | No | Connection max lifetime in seconds (optional) | |
| `LOCALAI_EMBEDDINGS_MODEL` | No | LocalAI embeddings model (1536 dims) | `text-embedding-ada-002` |
| `VOYAGEAI_EMBEDDINGS_DIMS` | No | Optional VoyageAI embeddings dimensions, to explicitly set the expected output length | |
| `VOYAGEAI_EMBEDDINGS_MODEL` | No | VoyageAI embeddings model | `voyage-3-lite` |
| `VERTEX_EMBEDDINGS_ENDPOINT` | No | Vertex AI embeddings endpoint. Format: `https://{location}-aiplatform.googleapis.com/v1/projects/{project}/locations/{location}/publishers/google/models/{model}:predict` | |
| `MULTI_PROJECT_AUTH_REQUIRED` | No | Set to `false`/`0` to disable per-project auth enforcement | `true` |
| `MULTI_PROJECT_DEFAULT_TOKEN` | No | Optional token value used when auto-initializing; if omitted, a random token is generated | |
| `MULTI_PROJECT_AUTO_INIT_TOKEN` | No | Set to `true`/`1` to auto-create a token file on first access when none exists | `false` |
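As an illustration, the variables above could be combined like this for a local run with Ollama embeddings and hybrid search enabled. The binary name is assumed from the repository name, and the model/port choices are examples, not requirements:

```shell
# Sketch of a local configuration; adjust paths, models, and ports for your setup.
export LIBSQL_URL="file:./libsql.db"
export EMBEDDINGS_PROVIDER="ollama"
export OLLAMA_HOST="http://localhost:11434"
export OLLAMA_EMBEDDINGS_MODEL="nomic-embed-text"
export EMBEDDING_DIMS=768           # should match the model for new databases
export HYBRID_SEARCH=true
export HYBRID_RRF_K=60
export HYBRID_TEXT_WEIGHT=0.4
export HYBRID_VECTOR_WEIGHT=0.6
export METRICS_PROMETHEUS=true
export METRICS_PORT=9090

./mcp-memory-libsql-go              # assumed binary name

# Once running, the metrics port answers health checks:
curl http://localhost:9090/healthz
```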

Schema

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

Tools

Functions exposed to the LLM to take actions

No tools

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ZanzyTHEbar/mcp-memory-libsql-go'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.