
kb-mcp-server

by Geeksfino
.env.example (1.6 kB)
# Configuration method
TXTAI_YAML_CONFIG=config.example.yml

# Model settings
TXTAI_MODEL_PATH=sentence-transformers/all-MiniLM-L6-v2
TXTAI_MODEL_GPU=true
TXTAI_MODEL_NORMALIZE=true

# Storage settings
TXTAI_STORE_CONTENT=true
TXTAI_STORAGE_MODE=memory

# Storage location (choose one configuration):

## 1. Local storage (default if no URLs provided)
TXTAI_INDEX_PATH=~/.txtai/embeddings

## 2. Remote PostgreSQL + pgvector
# TXTAI_CONTENT_URL=postgresql://user:pass@localhost:5432/dbname
# TXTAI_CONTENT_SCHEMA=public
# TXTAI_VECTOR_BACKEND=pgvector
# TXTAI_VECTOR_URL=postgresql://user:pass@localhost:5432/dbname
# TXTAI_VECTOR_SCHEMA=public
# TXTAI_VECTOR_TABLE=vectors

## 3. Mixed: remote content + local vectors
# TXTAI_CONTENT_URL=postgresql://user:pass@localhost:5432/dbname
# TXTAI_VECTOR_BACKEND=faiss
# TXTAI_INDEX_PATH=~/.txtai/embeddings

# Docker configuration
PORT=8000
HOST=0.0.0.0
TRANSPORT=sse
CONFIG_FILE=config.yml

# Hugging Face models to pre-cache during build
# Comma-separated list of model names
HF_TRANSFORMERS_MODELS=bert-base-uncased,distilbert-base-uncased
HF_SENTENCE_TRANSFORMERS_MODELS=sentence-transformers/all-MiniLM-L6-v2

# Path to the Hugging Face cache directory on the host
HF_CACHE_DIR=~/.cache/huggingface/hub

# Embeddings configuration
# Path to an embeddings directory or tar.gz file
EMBEDDINGS_PATH=/data/embeddings

# For mounting a local directory into the container
LOCAL_EMBEDDINGS_PATH=./embeddings
CONTAINER_EMBEDDINGS_PATH=/data/embeddings

# TxtAI dataset configuration
TXTAI_DATASET_ENABLED=true
TXTAI_DATASET_NAME=web_questions
TXTAI_DATASET_SPLIT=train
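
As a rough illustration of how these settings come together, here is a minimal Python sketch that maps a few of the variables above onto a txtai Embeddings instance. The mapping is an assumption for illustration only; how kb-mcp-server itself loads this configuration may differ.

import os

from txtai.embeddings import Embeddings

# Hypothetical mapping of the .env settings above onto a txtai config dict.
# The keys shown ("path", "gpu", "content") are standard txtai options;
# reading them from these exact environment variables is an assumption.
config = {
    "path": os.environ.get("TXTAI_MODEL_PATH", "sentence-transformers/all-MiniLM-L6-v2"),
    "gpu": os.environ.get("TXTAI_MODEL_GPU", "true") == "true",
    "content": os.environ.get("TXTAI_STORE_CONTENT", "true") == "true",
}

embeddings = Embeddings(config)

# Index two documents and run a semantic search over them
embeddings.index([
    (0, "txtai builds embeddings-based semantic search applications", None),
    (1, "pgvector stores vectors inside PostgreSQL", None),
])
print(embeddings.search("which component stores vectors in postgres?", 1))

# Persist the index to the configured local storage path (option 1 above)
embeddings.save(os.path.expanduser(os.environ.get("TXTAI_INDEX_PATH", "~/.txtai/embeddings")))

With the pgvector variables uncommented (option 2), the same Embeddings instance would presumably be configured with a PostgreSQL-backed content and vector store instead of the local index path.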

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Geeksfino/kb-mcp-server'
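
The same lookup can be scripted; here is a minimal Python sketch using the requests library (the shape of the JSON response is not assumed here):

import requests

# Fetch this server's directory entry from the Glama MCP API
url = "https://glama.ai/api/mcp/v1/servers/Geeksfino/kb-mcp-server"
response = requests.get(url, timeout=10)
response.raise_for_status()

# The endpoint returns JSON; print it to inspect the available fields
print(response.json())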

If you have feedback or need assistance with the MCP directory API, please join our Discord server.