
Why LocalNest?

Every other MCP server forces you to choose: memory, knowledge graph, or code intelligence. Never all three.

LocalNest is the first to combine all three pillars into one server that runs entirely on your machine:

| Pillar | What it does | Why it matters |
|---|---|---|
| Code Intelligence | Hybrid BM25+vector search, AST-aware chunking, symbol finding (defs/usages/callers) | Your AI understands code structure, not just text |
| Knowledge Graph | Temporal entity-triple store with multi-hop traversal and `as_of` time-travel queries | Architectural decisions, dependencies, and facts, versioned over time |
| Persistent Memory | Cross-session recall, semantic dedup, agent-scoped isolation, conversation ingestion | Your AI remembers what you taught it, forever |
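
The "hybrid BM25+vector" row means two ranked result lists get fused into one. As a minimal sketch of one common fusion strategy, Reciprocal Rank Fusion (RRF) — an illustration, not necessarily LocalNest's exact scoring:

```typescript
// Illustrative sketch of hybrid retrieval fusion via Reciprocal Rank Fusion.
// This is a standard technique; LocalNest's actual fusion may differ.

type RankedIds = string[]; // document ids, best match first

function rrfFuse(bm25: RankedIds, vector: RankedIds, k = 60): RankedIds {
  const scores = new Map<string, number>();
  for (const list of [bm25, vector]) {
    list.forEach((id, rank) => {
      // Each list contributes 1 / (k + rank + 1); k dampens head-of-list dominance.
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// A hit ranked by both retrievers surfaces at the top of the fused list:
const fused = rrfFuse(["auth.ts", "db.ts"], ["db.ts", "cache.ts"]);
// "db.ts" ranks first because it appears in both lists.
```

The appeal of RRF is that it needs only ranks, so BM25 scores and cosine similarities never have to be normalized onto a common scale.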


How LocalNest Compares

No other MCP server covers all three pillars. Here's how the landscape breaks down:

vs Memory-Only Servers

| Feature | LocalNest | Mem0 | Basic Memory | MCP Memory Service | AgentMemory |
|---|---|---|---|---|---|
| Persistent AI memory | Yes | Yes | Yes | Yes | Yes |
| Knowledge graph | Yes | No | No | No | No |
| Semantic code search | Yes | No | No | No | No |
| Symbol finding (defs/usages) | Yes | No | No | No | No |
| AST-aware chunking | Yes | No | No | No | No |
| Local-first / no cloud | Yes | Hybrid | Yes | Yes | Yes |
| MCP tools | 74 | 8 | ~10 | 24 | 43 |

Mem0 has 41k stars and $24M in funding — but it's memory-only with no code intelligence. Basic Memory integrates with Obsidian but can't search code. AgentMemory has auto-capture hooks but zero code features.

vs Code Intelligence Servers

| Feature | LocalNest | GitNexus | claude-context | codebase-memory-mcp | CodeGraphContext |
|---|---|---|---|---|---|
| Semantic code search | Yes | Yes | Yes | Yes | Yes |
| Knowledge graph | Yes | Code-only | No | Code-only | Yes |
| Persistent AI memory | Yes | No | No | No | No |
| Cross-session recall | Yes | No | No | No | No |
| Symbol finding | Yes | Yes | No | Yes | Yes |
| Temporal time-travel queries | Yes | No | No | No | No |
| Conversation ingestion | Yes | No | No | No | No |
| Local-first / no cloud | Yes | Yes | Partial | Yes | Yes |
| MCP tools | 74 | 16 | ~5 | 14 | ~10 |

GitNexus (27k stars) has strong code search but no memory. claude-context (Zilliz, 5.9k stars) is Milvus-backed with no KG or memory. codebase-memory-mcp (DeusData) is the closest competitor — code + KG in a single binary — but has no AI memory layer.

Full Feature Matrix

| Feature | LocalNest | codebase-memory-mcp | GitNexus | claude-context | Basic Memory | Mem0 |
|---|---|---|---|---|---|---|
| Semantic code search (hybrid BM25+vec) | Yes | Yes | Yes | Yes | No | No |
| Knowledge graph (entities + triples) | Yes | Code-only | Code-only | No | No | No |
| Persistent AI memory | Yes | No | No | No | Yes | Yes |
| Symbol finding (defs/usages/callers) | Yes | Yes | Yes | No | No | No |
| AST-aware chunking | Yes | Yes | Yes | Yes | No | No |
| Temporal time-travel queries | Yes | No | No | No | No | No |
| Multi-hop graph traversal | Yes | No | No | No | No | No |
| Conversation ingestion | Yes | No | No | No | No | No |
| Agent-scoped isolation | Yes | No | No | No | No | No |
| Semantic dedup | Yes | No | No | No | No | No |
| Hooks system (pre/post callbacks) | Yes | No | No | No | No | No |
| Interactive TUI dashboard | Yes | No | No | No | No | No |
| Local-first / no cloud | Yes | Yes | Yes | Partial | Yes | Hybrid |
| MCP tools | 74 | 14 | 16 | ~5 | ~10 | 8 |
| Zero external deps | No (Node.js) | Yes (binary) | No | No | No | No |

LocalNest is the only server that checks every box in the first three rows.
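
The temporal time-travel row refers to querying the knowledge graph as it existed at a past point in time via `as_of`. A hypothetical tool-call payload — the tool and argument names here are illustrative assumptions, not LocalNest's documented schema:

```json
{
  "tool": "kg_query",
  "arguments": {
    "subject": "payment-service",
    "predicate": "depends_on",
    "as_of": "2024-06-01T00:00:00Z"
  }
}
```

Such a query would return the dependency facts that were true on that date, even if they have since been superseded.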


Quick Start

```shell
# Install
npm install -g localnest-mcp

# Setup workspace + embeddings
localnest setup

# Verify
localnest doctor
```

Interactive dashboard:

```shell
localnest dashboard
```

MCP Client Config

After setup, add this to your AI client config:

```json
{
  "mcpServers": {
    "localnest": {
      "command": "localnest-mcp",
      "startup_timeout_sec": 30,
      "env": {
        "MCP_MODE": "stdio",
        "LOCALNEST_CONFIG": "~/.localnest/config/localnest.config.json",
        "LOCALNEST_INDEX_BACKEND": "sqlite-vec",
        "LOCALNEST_MEMORY_ENABLED": "true"
      }
    }
  }
}
```

Works with Claude Code, Cursor, Windsurf, Cline, Continue, Gemini CLI, and any MCP-compatible client.


Tool Suites

LocalNest exposes 74 specialized MCP tools, organized into focused suites.

Full parameter reference: Tool Documentation


Agentic Workflows

LocalNest is designed as the foundational context layer for AI coding agents:

  • Cold start: `agent_prime` instantly hydrates the context window with relevant memories, recent changes, and project state.

  • Deep investigation: `find` runs fused search across code fragments and historical design decisions in a single call.

  • Continuous learning: `teach` saves architectural rules that persist across sessions, so agents don't repeat past mistakes.

  • Outcome capture: `capture_outcome` records what worked and what didn't, building an experience base over time.
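
In practice an agent issues these as MCP tool calls. Hypothetical payloads — the tool names come from the list above, but the argument names are assumptions for illustration:

```json
[
  { "tool": "agent_prime", "arguments": { "workspace": "my-app" } },
  { "tool": "find", "arguments": { "query": "why did we switch to sqlite-vec?" } },
  { "tool": "teach", "arguments": { "fact": "All DB access goes through the repository layer" } },
  { "tool": "capture_outcome", "arguments": { "task": "migrate index backend", "outcome": "success" } }
]
```

See the tool documentation for the actual parameter schemas.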


Enterprise-Grade Quality

  • OIDC Trusted Publishing for verifiable npm provenance

  • Continuous CodeQL static analysis on all branches

  • OpenSSF Scorecard monitoring and proactive Dependabot updates


Troubleshooting

A direct `npm install -g git+https://...` may fail with `TAR_ENTRY_ERRORS`. This is a known npm limitation.

Fix: clone, pack, install:

```shell
git clone https://github.com/wmt-mobile/localnest.git
cd localnest && npm pack
npm install -g ./localnest-mcp-*.tgz
cd $(npm root -g)/localnest-mcp && npm install --no-save @huggingface/transformers
localnest doctor
```
