M.I.M.I.R - Multi-agent Intelligent Memory & Insight Repository

by orneryd
env.local
# NornicDB Local GGUF Embedding Configuration
# Source this file before running NornicDB:
#   source env.local && ./nornicdb_local serve

# Embedding provider: local uses GGUF models via llama.cpp
export NORNICDB_EMBEDDING_PROVIDER=local

# Model name (resolves to $NORNICDB_MODELS_DIR/{name}.gguf)
export NORNICDB_EMBEDDING_MODEL=bge-m3

# Embedding dimensions (BGE-M3 uses 1024)
export NORNICDB_EMBEDDING_DIMENSIONS=1024

# Models directory
export NORNICDB_MODELS_DIR=/Users/c815719/src/Mimir/nornicdb/data/models

# GPU acceleration: -1=auto (all layers to GPU), 0=CPU only, N=N layers
# For Apple Silicon Mac, -1 uses Metal acceleration
export NORNICDB_EMBEDDING_GPU_LAYERS=-1

# Server ports
export NORNICDB_BOLT_PORT=7687
export NORNICDB_HTTP_PORT=7474

# Data directory
export NORNICDB_DATA_DIR=/Users/c815719/src/Mimir/nornicdb/data
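
A minimal launch sequence, assembled from the comments in the file above. The model-file check uses the {name}.gguf resolution rule the file documents; the final curl probe is an assumption that the HTTP interface answers on NORNICDB_HTTP_PORT, and the exact health/status route may differ in your NornicDB build:

# Load the configuration into the current shell
source env.local

# Confirm the GGUF model resolves where the provider will look for it,
# per the resolution rule documented in env.local
ls "$NORNICDB_MODELS_DIR/$NORNICDB_EMBEDDING_MODEL.gguf"

# Start the server (runs in the foreground; probe from another terminal)
./nornicdb_local serve

# Assumption: the HTTP interface responds on the configured port;
# the exact status endpoint is not documented here
curl -s "http://localhost:$NORNICDB_HTTP_PORT/"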

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/orneryd/Mimir'
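
To eyeball the response, you can pipe it through jq; this only pretty-prints whatever JSON the endpoint returns, so no particular field names are assumed:

curl -s 'https://glama.ai/api/mcp/v1/servers/orneryd/Mimir' | jq .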

If you have feedback or need assistance with the MCP directory API, please join our Discord server.