# ContextLattice

ContextLattice is a local-first memory orchestration system for AI agents and tools. It offers the following capabilities:
- **Health Check:** Query the orchestrator's health status to verify it is operational.
- **Write Memory:** Store new memory items by providing a project name, file name, content, and an optional topic path for hierarchical organization.
- **Search Memory:** Search contextual memory entries by project and query, with optional filters for agent ID, topic path, grounding info, and retrieval debug details.
- **Durable Storage:** Orchestrate memory writes with outbox fanout to specialized sinks (e.g., Qdrant, Mongo, MindsDB, Letta), targeting 100+ messages/second throughput.
- **Intelligent Retrieval:** Multi-source recall with result merging, ranking, and a learning loop for continuous improvement.
- **Code Context Enrichment:** Rerank code context based on symbol overlap, file-path proximity, and recency.
- **Agent Task Management:** Queue, route, and manage task lifecycles (create, status, replay, recover leases) for external/internal agent runners.
- **Context Expansion:** Dynamically expand agent context with budgeted layers (factual snippets, topic rollups, raw file refs) and async deep escalation.
- **Telemetry & Maintenance:** Access fanout/retention telemetry, clean up low-value memory, and purge telemetry data.
- **Security Controls:** Enforce secret storage policies (redaction, blocking, or allowing) with API key authentication.
- **Web3 Integration:** Support Web3 messaging surfaces such as IronClaw, OpenClaw, and ZeroClaw.
## Why ContextLattice
ContextLattice reduces repeated inference by turning prior project work into high-signal, retrievable context.
- Durable memory writes with fanout to specialized stores.
- Fast + deep retrieval modes with staged fetch and fail-open continuation.
- Rollup-first context to keep token use efficient while preserving drill-down paths to raw artifacts.
- Local-first deployment with optional cloud-backed dependencies.
- Human + agent UX through HTTP APIs, MCP transport, and an operations dashboard.
## Architecture (Public v3 lane)

| Layer | Primary runtime | Responsibility |
| --- | --- | --- |
| Gateway/API | Go | |
| Retrieval + memory services | Go + Rust | Fast/durable retrieval lanes, rollup handling, memory-bank adapters |
| Legacy fallback | Python | Compatibility fallback only (not default hot path) |
| Dashboard | TypeScript/Next.js | Console, mindmap, status, billing, setup UX |
## Install

### Less-technical installers

- macOS DMG: https://github.com/sheawinkler/ContextLattice/releases/latest/download/ContextLattice-macOS-universal.dmg
- Linux bundle: https://github.com/sheawinkler/ContextLattice/releases/latest/download/ContextLattice-linux-bootstrap.tar.gz
- Windows MSI: https://github.com/sheawinkler/ContextLattice/releases/latest/download/ContextLattice-windows-x64.msi
### Developer install

```shell
git clone git@github.com:sheawinkler/ContextLattice.git
cd ContextLattice
gmake quickstart
```

## Quickstart
### Prerequisites

- Docker/Compose v2-compatible runtime
- macOS, Linux, or Windows (WSL2)
- `gmake`, `jq`, `rg`, `python3`, `curl`
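The CLI prerequisites above can be checked in one pass. This is an illustrative preflight sketch, not part of the official setup:

```shell
# Illustrative preflight check (not part of the official setup): report which
# of the required CLI tools from the prerequisites list are on PATH.
check_tools() {
  missing=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_tools gmake jq rg python3 curl || echo "install the missing tools before running gmake quickstart"
```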
### Launch

1) Configure environment

```shell
cp .env.example .env
ln -svf ../../.env infra/compose/.env
gmake quickstart
```

`gmake quickstart` prompts for a runtime profile and launches with sensible defaults.
If launched from the macOS DMG bootstrap, it also generates:

- `~/ContextLattice/setup/agent_contextlattice_instructions.md` (copied to clipboard)
- `~/ContextLattice/setup/agent_smoke_write_read.md` (operator write/read smoke check)
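After launching, a small helper can poll the health endpoint until the orchestrator is up. This is a hedged sketch, assuming the default local port used elsewhere in this README:

```shell
# Sketch: poll the orchestrator health endpoint until it responds, or give up.
# Assumes the default local endpoint (port 8075) used elsewhere in this README.
wait_for_health() {
  url="${1:-http://127.0.0.1:8075/health}"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "orchestrator healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "orchestrator not healthy after ${tries} attempts" >&2
  return 1
}
```

Run `wait_for_health` before the verification commands below so the first `curl` does not race the container startup.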
### Verify

```shell
ORCH_KEY="$(awk -F= '/^CONTEXTLATTICE_ORCHESTRATOR_API_KEY=/{print substr($0,index($0,"=")+1)}' .env)"
curl -fsS http://127.0.0.1:8075/health | jq
curl -fsS -H "x-api-key: ${ORCH_KEY}" http://127.0.0.1:8075/status | jq '.service,.sinks'
```

## Runtime Profiles
| Profile | Use case | CPU | RAM | Storage |
| --- | --- | --- | --- | --- |
| | Laptop-friendly local usage | 2-4 vCPU | 8-12 GB | 25-80 GB |
| | Higher throughput and deeper recall | 6-8 vCPU | 12-20 GB | 100-180 GB |
## Core API examples

### MCP Tool Contract (Glama-lite / stdio bridge)

The Glama single-container profile exposes three MCP tools with explicit scope:

- `health`: read-only readiness/troubleshooting check (`GET /health`), no side effects.
- `memory.search`: read-only scoped retrieval (`POST /memory/search`) with lifecycle states (`ready|pending|degraded|empty`) and optional grounding/debug payloads.
- `memory.write`: state-changing durable write (`POST /memory/write`) with explicit fanout status and warning fields.

All three tools return JSON in both text content and structured payload form for client compatibility.
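To illustrate the `memory.search` lifecycle states, a client might branch on the returned state. The routing below is only a sketch: the four states come from the tool contract above, but the actions, and how the state is extracted from the response, are up to the client:

```shell
# Sketch: branch client behavior on the memory.search lifecycle state.
# The four states come from the tool contract; the actions are illustrative.
route_on_state() {
  case "$1" in
    ready)    echo "use results" ;;
    pending)  echo "retry shortly" ;;
    degraded) echo "use results, but treat them as partial" ;;
    empty|*)  echo "no stored context for this query" ;;
  esac
}

route_on_state ready  # → use results
```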
### Write memory

```shell
curl -X POST "http://127.0.0.1:8075/memory/write" \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${ORCH_KEY}" \
  -d '{
    "projectName": "my_project",
    "fileName": "notes/decision.md",
    "content": "Switched retrieval_mode to balanced for normal runs.",
    "topicPath": "runbooks/retrieval"
  }'
```

### Read memory
```shell
curl -X POST "http://127.0.0.1:8075/memory/search" \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${ORCH_KEY}" \
  -d '{
    "project": "my_project",
    "query": "retrieval mode decision",
    "topic_path": "runbooks/retrieval",
    "include_grounding": true
  }'
```

### Deep read with continuation metadata
```shell
curl -X POST "http://127.0.0.1:8075/memory/search" \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${ORCH_KEY}" \
  -d '{
    "project": "my_project",
    "query": "full architecture context",
    "retrieval_mode": "deep",
    "include_grounding": true,
    "include_retrieval_debug": true
  }'
```

## Configuration (public-safe essentials)
Set only what you need for normal operation:

```
CONTEXTLATTICE_ORCHESTRATOR_URL=http://127.0.0.1:8075
CONTEXTLATTICE_ORCHESTRATOR_API_KEY=<set-by-setup>
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET=<long-random-secret>
APP_URL=http://localhost:3000
```

For the full config reference, use `.env.example`.
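`NEXTAUTH_SECRET` only needs to be a long random string. One common way to generate such a value (assuming `openssl` is installed, which is not a stated prerequisite) is:

```shell
# Generate a random value suitable for NEXTAUTH_SECRET (any long random
# string works; openssl is just one common way to produce one).
openssl rand -base64 32
```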
## Dashboard

- UI: http://127.0.0.1:3000/console
- Mindmap: http://127.0.0.1:3000/mindmap
- Status: http://127.0.0.1:3000/status
## Public vs paid
This repository tracks the public free lane (v3.x).
Advanced premium tuning, proprietary optimization policy, and private commercialization docs live outside this public lane.
## Documentation

- Website docs: https://contextlattice.io/
- Local docs index: `docs/`
- Hugging Face lite deployment: `docs/huggingface-space-lite.md`
## License
Apache 2.0. See LICENSE.