M.I.M.I.R - Multi-agent Intelligent Memory & Insight Repository

by orneryd
Dockerfile.llama-cuda (1.58 kB)
# llama.cpp CUDA static library builder
# Build once, reuse forever - saves ~15 min on every NornicDB build
#
# Build: docker build -f docker/Dockerfile.llama-cuda -t timothyswt/llama-cuda-libs:7285 .
# Push:  docker push timothyswt/llama-cuda-libs:7285
FROM nvidia/cuda:12.6.3-devel-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake git ca-certificates && rm -rf /var/lib/apt/lists/*

ARG LLAMA_VERSION=b7285

WORKDIR /llama

RUN git clone --depth 1 --branch ${LLAMA_VERSION} https://github.com/ggerganov/llama.cpp.git . && \
    cmake -B build \
        -DLLAMA_STATIC=ON -DBUILD_SHARED_LIBS=OFF \
        -DLLAMA_BUILD_TESTS=OFF -DLLAMA_BUILD_EXAMPLES=OFF -DLLAMA_BUILD_SERVER=OFF \
        -DLLAMA_CURL=OFF \
        -DGGML_CUDA=ON -DGGML_CUDA_FA_ALL_QUANTS=ON \
        -DCMAKE_C_FLAGS="-fPIC" -DCMAKE_CXX_FLAGS="-fPIC" \
        -DCMAKE_POSITION_INDEPENDENT_CODE=ON && \
    cmake --build build --config Release -j$(nproc)

# Combine and export
RUN mkdir -p /output/lib /output/include && \
    find build -name "*.a" -exec cp {} /output/lib/ \; && \
    echo "Libraries found:" && ls -la /output/lib/*.a && \
    echo "CREATE /output/lib/libllama_linux_amd64_cuda.a" > /tmp/ar.mri && \
    for lib in /output/lib/lib*.a; do echo "ADDLIB $lib" >> /tmp/ar.mri; done && \
    echo "SAVE" >> /tmp/ar.mri && \
    echo "END" >> /tmp/ar.mri && \
    cat /tmp/ar.mri && \
    ar -M < /tmp/ar.mri && \
    cp include/llama.h ggml/include/*.h /output/include/ && \
    echo "Combined library:" && ls -lh /output/lib/libllama_linux_amd64_cuda.a
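
A downstream image can reuse these prebuilt artifacts with a multi-stage COPY instead of recompiling llama.cpp. The following is a minimal sketch, assuming the image above has been pushed as timothyswt/llama-cuda-libs:7285; the consumer stage and the linker flags shown are illustrative assumptions, not taken from the actual NornicDB build.

# Pull the prebuilt static library and headers into a consumer build (sketch).
FROM timothyswt/llama-cuda-libs:7285 AS llama-libs

FROM nvidia/cuda:12.6.3-devel-ubuntu22.04 AS builder
# /output/lib and /output/include are the export paths produced by the builder image above.
COPY --from=llama-libs /output/lib/libllama_linux_amd64_cuda.a /usr/local/lib/
COPY --from=llama-libs /output/include/ /usr/local/include/
# Link against the combined archive, e.g. (hypothetical CGO flags):
#   CGO_LDFLAGS="-L/usr/local/lib -lllama_linux_amd64_cuda -L/usr/local/cuda/lib64 -lcudart -lstdc++ -lm"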
