
Files-DB-MCP

by randomm
docker-compose.yml (2.24 kB)
services:
  vector-db:
    image: qdrant/qdrant:latest
    volumes:
      - qdrant_data:/qdrant/storage
    # Disable health check for vector-db as it's a third-party image that doesn't include curl or wget
    # Instead, we'll validate connection to vector-db from the files-db-mcp service
    ports:
      - "6333:6333" # For internal communication

  files-db-mcp:
    build:
      context: .
      dockerfile: ./Dockerfile
    depends_on:
      vector-db:
        condition: service_started
    volumes:
      - ${PROJECT_DIR:-./}:/project:ro # Mount the project directory read-only
      - app_data:/app/data # For persistent application data
      - model_cache:/root/.cache/huggingface/hub # Persist model cache between container restarts
    environment:
      - VECTOR_DB_HOST=vector-db
      - VECTOR_DB_PORT=6333
      - EMBEDDING_MODEL=${EMBEDDING_MODEL:-sentence-transformers/all-MiniLM-L6-v2} # Default code embedding model
      # For faster startup, you can set EMBEDDING_MODEL to a smaller model:
      # Example: export EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
      - FAST_STARTUP=${FAST_STARTUP:-false} # Set to 'true' to prioritize faster startup over quality
      - QUANTIZATION=true # Enable quantization by default
      - BINARY_EMBEDDINGS=false # Disable binary embeddings by default
      - DEBUG=true # Enable debug mode
      - PROJECT_PATH=/project
      - IGNORE_PATTERNS=.git,node_modules,__pycache__,venv,dist,build,*.pyc,.files-db-mcp
      - PORT=8000 # Explicitly set the port
    ports:
      - "3000:8000" # Map container port 8000 to host port 3000
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 30s
      retries: 10
      start_period: 600s # Give much more time for large model downloads

volumes:
  qdrant_data:
    name: files-db-mcp-qdrant-data
  app_data:
    name: files-db-mcp-app-data
  model_cache:
    name: files-db-mcp-model-cache # Persistent storage for downloaded models with explicit naming
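A minimal sketch of bringing the stack up and verifying both services from the host, assuming Docker Compose v2 (`docker compose`; with v1, use `docker-compose` instead). The project path and the FAST_STARTUP override are illustrative; the two health URLs follow from the port mappings above.

# Index a specific codebase (PROJECT_DIR defaults to ./ if unset)
export PROJECT_DIR=/path/to/your/project

# Optional: prioritize startup speed over embedding quality
export FAST_STARTUP=true

# Build and start both services in the background
docker compose up --build -d

# files-db-mcp health endpoint, reachable on the mapped host port 3000
curl -f http://localhost:3000/health

# Qdrant's REST API is exposed on 6333; listing collections is a quick liveness check
curl http://localhost:6333/collections

Note that the generous start_period of 600s exists because the first run downloads the embedding model; subsequent runs reuse the files-db-mcp-model-cache volume and become healthy much sooner.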

MCP directory API

We provide all the information about listed MCP servers via our MCP directory API. For example, to fetch this server's entry:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/randomm/files-db-mcp'
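The response schema isn't documented here, but assuming the endpoint returns JSON (as is typical for REST APIs), you can pretty-print it with jq:

curl -s 'https://glama.ai/api/mcp/v1/servers/randomm/files-db-mcp' | jq .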

If you have feedback or need assistance with the MCP directory API, please join our Discord server.