M.I.M.I.R - Multi-agent Intelligent Memory & Insight Repository

by orneryd
.env.arm64
# =============================================================================
# Mimir ARM64 Configuration
# =============================================================================
# This .env file is configured for docker-compose.arm64.yml
# Uses: copilot-api, llama-vl-server (Qwen2.5-VL), no external Neo4j/NornicDB
# =============================================================================

# --- Neo4j/NornicDB Graph Database ---
# Connects to host machine's Neo4j instance
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=admin
NEO4J_PASSWORD=password

# --- Server Configuration ---
NODE_ENV=production
PORT=3000

# --- Workspace Configuration ---
WORKSPACE_ROOT=/workspace
HOST_WORKSPACE_ROOT=~/src

# --- LLM API Configuration (Copilot API) ---
MIMIR_DEFAULT_PROVIDER=copilot
MIMIR_LLM_API=http://localhost:4141
MIMIR_LLM_API_PATH=/v1/chat/completions
MIMIR_LLM_API_MODELS_PATH=/v1/models
MIMIR_LLM_API_KEY=dummy-key

# --- Model Configuration ---
MIMIR_DEFAULT_MODEL=gpt-4.1
MIMIR_DEFAULT_CONTEXT_WINDOW=128000

# Per-Agent Model Configuration (optional overrides - leave empty for defaults)
MIMIR_PM_MODEL=
MIMIR_WORKER_MODEL=
MIMIR_QC_MODEL=

# --- Embeddings Configuration ---
MIMIR_EMBEDDINGS_ENABLED=false
MIMIR_EMBEDDINGS_PROVIDER=openai
MIMIR_EMBEDDINGS_API=http://llama-server:8080
MIMIR_EMBEDDINGS_API_PATH=/v1/embeddings
MIMIR_EMBEDDINGS_API_MODELS_PATH=/v1/models
MIMIR_EMBEDDINGS_API_KEY=dummy-key
MIMIR_EMBEDDINGS_MODEL=mxbai-embed-large
MIMIR_EMBEDDINGS_DIMENSIONS=1024
MIMIR_EMBEDDINGS_CHUNK_SIZE=768
MIMIR_EMBEDDINGS_CHUNK_OVERLAP=10
MIMIR_EMBEDDINGS_DELAY_MS=100
MIMIR_EMBEDDINGS_MAX_RETRIES=3

# --- Image/VL Embeddings Configuration (Qwen2.5-VL) ---
MIMIR_EMBEDDINGS_IMAGES=false
MIMIR_EMBEDDINGS_IMAGES_DESCRIBE_MODE=true
MIMIR_EMBEDDINGS_VL_PROVIDER=llama.cpp
MIMIR_EMBEDDINGS_VL_API=http://llama-vl-server:8080
MIMIR_EMBEDDINGS_VL_API_PATH=/v1/chat/completions
MIMIR_EMBEDDINGS_VL_API_KEY=dummy-key
MIMIR_EMBEDDINGS_VL_MODEL=qwen2.5-vl
MIMIR_EMBEDDINGS_VL_CONTEXT_SIZE=131072
MIMIR_EMBEDDINGS_VL_MAX_TOKENS=2048
MIMIR_EMBEDDINGS_VL_TEMPERATURE=0.7
MIMIR_EMBEDDINGS_VL_DIMENSIONS=768
MIMIR_EMBEDDINGS_VL_TIMEOUT=180000

# --- Indexing Configuration ---
MIMIR_INDEXING_THREADS=1

# --- Feature Flags ---
MIMIR_FEATURE_PM_MODEL_SUGGESTIONS=true
MIMIR_AUTO_INDEX_DOCS=true

# --- Agent Execution Limits ---
MIMIR_AGENT_RECURSION_LIMIT=100

# --- Security Configuration ---
MIMIR_ENABLE_SECURITY=false
# Uncomment and set these for dev users:
# MIMIR_DEV_USER_ADMIN=admin:admin:admin
# MIMIR_DEV_USER_DEVELOPER=
# MIMIR_DEV_USER_ANALYST=
# MIMIR_DEV_USER_VIEWER=
# MIMIR_JWT_SECRET=

# --- OAuth Configuration (uncomment if using OAuth) ---
# MIMIR_AUTH_PROVIDER=oauth
# MIMIR_OAUTH_AUTHORIZATION_URL=
# MIMIR_OAUTH_TOKEN_URL=
# MIMIR_OAUTH_USERINFO_URL=
# MIMIR_OAUTH_CLIENT_ID=
# MIMIR_OAUTH_CLIENT_SECRET=
# MIMIR_OAUTH_CALLBACK_URL=
# MIMIR_OAUTH_ALLOW_HTTP=

# --- Advanced Configuration ---
MIMIR_PARALLEL_EXECUTION=false
MIMIR_INSTALL_DIR=/app
MIMIR_AGENTS_DIR=/app/docs/agents

# --- PCTX Integration (Code Mode) ---
PCTX_URL=http://host.docker.internal:8080
PCTX_ENABLED=false

# --- NornicDB Configuration (if using NornicDB instead of Neo4j) ---
# NORNICDB_NO_AUTH=true
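
As a minimal usage sketch (assuming the repository ships the docker-compose.arm64.yml referenced in the header comment and that the file above is saved as .env.arm64 alongside it), the configuration can be passed to Docker Compose explicitly:

# assumes docker-compose.arm64.yml and .env.arm64 sit in the working directory
docker compose -f docker-compose.arm64.yml --env-file .env.arm64 up -d

The --env-file flag points Compose at this file instead of the default .env lookup, so the ARM64-specific values apply without renaming the file.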

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/orneryd/Mimir'
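
For easier reading, the same request can be piped through jq (assuming the endpoint returns JSON, as an /api/ path typically does; jq is optional and only pretty-prints):

# assumes the API returns JSON; jq only formats the output
curl -s 'https://glama.ai/api/mcp/v1/servers/orneryd/Mimir' | jq .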

If you have feedback or need assistance with the MCP directory API, please join our Discord server.