Logica Context
Your AI forgets everything after a context reset. Ours doesn't.
Quick Start · What Makes It Different · All 17 Tools · Contributing
The Problem Everyone Has
Every MCP context server does the same thing: runs commands in a sandbox, returns summaries, saves tokens. That's table stakes.
Here's what none of them solve:
- Your AI doesn't know your project exists until you explain it. Every. Single. Session.
- Knowledge dies when the session ends. Tomorrow you start from zero.
- Your teammate indexed the same codebase yesterday. You can't access any of it.
- "Find the auth code" returns nothing because the file is called `middleware.ts`.
- You have no idea how much context you've burned. You find out when the AI starts hallucinating.
- The AI doesn't know you just pushed 3 commits. It's still referencing yesterday's code.
Logica Context fixes all six.
What Makes It Different
1. Project DNA
Other tools: AI starts every session blind. You waste 5 minutes explaining your stack.
Logica Context: One call to lctx_scan and the AI permanently knows your project — languages, frameworks, dependencies, folder structure, patterns. Indexed. Searchable. Automatic.
```
> lctx_scan

Project DNA: my-saas-app
Stack: Node.js, TypeScript
Frameworks: Next.js, Prisma, Tailwind CSS, Vitest
Languages: TypeScript (142), JavaScript (23), SQL (8)
Patterns: component-based, page-based routing, API layer, CI/CD
Key files: package.json, tsconfig.json, Dockerfile, .env.example
```

The AI now knows your project before you type a single word.
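The detection step can be pictured as a lookup over the project manifest. This is a minimal sketch, not lctx_scan's actual implementation; `detectFrameworks` and the marker table are illustrative names:

```typescript
// Illustrative framework detection from package.json dependencies.
// The marker table is an assumption, not the real lctx_scan internals.
const FRAMEWORK_MARKERS: Record<string, string> = {
  next: "Next.js",
  prisma: "Prisma",
  tailwindcss: "Tailwind CSS",
  vitest: "Vitest",
};

function detectFrameworks(pkg: {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}): string[] {
  // Merge runtime and dev dependencies, then map known package
  // names to human-readable framework labels.
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  return Object.keys(deps)
    .filter((d) => d in FRAMEWORK_MARKERS)
    .map((d) => FRAMEWORK_MARKERS[d]);
}

detectFrameworks({
  dependencies: { next: "^14.0.0" },
  devDependencies: { vitest: "^1.0.0" },
}); // → ["Next.js", "Vitest"]
```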
2. Cross-Session Memory
Other tools: session ends, everything is gone. You re-explain context every morning.
Logica Context: Knowledge persists between sessions via Supabase. Open a new session tomorrow and the AI already knows what you worked on, what you indexed, what you searched for. It picks up where you left off.
No other MCP server does this.
3. Team Knowledge Base
Other tools: each developer is an island. Dev A spends 30 minutes mapping the codebase. Dev B does the same thing an hour later.
Logica Context: lctx_team_push and lctx_team_search. Dev A indexes something, Dev B finds it. Shared knowledge base via Supabase, namespaced by project.
Your whole team builds collective AI memory.
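Conceptually, the shared store is a knowledge base keyed by project namespace. A toy sketch, with an in-memory `Map` standing in for Supabase (`teamPush` and `teamSearch` here are illustrative stand-ins, not the real tool signatures):

```typescript
// In-memory stand-in for a project-namespaced team knowledge base.
// Keys are "<project>:<entry>" so teams can share one store safely.
const teamStore = new Map<string, string>();

function teamPush(project: string, key: string, content: string): void {
  teamStore.set(`${project}:${key}`, content);
}

function teamSearch(project: string, term: string): string[] {
  // Only entries under this project's namespace are searched.
  const prefix = `${project}:`;
  return [...teamStore.entries()]
    .filter(([k, v]) => k.startsWith(prefix) && v.includes(term))
    .map(([k]) => k.slice(prefix.length));
}

teamPush("my-saas-app", "auth-notes", "JWT middleware lives in src/middleware");
teamSearch("my-saas-app", "JWT"); // → ["auth-notes"]
```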
4. Semantic Search
Other tools: keyword matching. Search "authentication" and miss verifyJWT() because the word "auth" isn't in the function name.
Logica Context: lctx_semantic uses real embeddings (Voyage AI or OpenAI) + pgvector for similarity search. Search by meaning, not strings. "How does login work?" finds your JWT middleware, your session store, your OAuth flow.
FTS5 keyword search is still there as the fast default. Semantic search activates when you have an API key.
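Under the hood, semantic search reduces to comparing embedding vectors. A minimal sketch of the ranking step follows; pgvector performs the equivalent cosine math server-side, and these function names are illustrative:

```typescript
// Cosine similarity: 1.0 means identical direction, 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank documents by similarity of their embeddings to the query embedding.
function rankByMeaning(
  query: number[],
  docs: { id: string; embedding: number[] }[]
): { id: string; score: number }[] {
  return docs
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.embedding) }))
    .sort((x, y) => y.score - x.score);
}
```

This is why "How does login work?" can surface `verifyJWT()`: the two texts embed near each other even though they share no keywords.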
5. Context Budget
Other tools: you have no idea how much context you've consumed until the AI starts forgetting things.
Logica Context: lctx_budget shows a real-time dashboard:
```
# Context Budget

[▓▓▓▓▓▓▓▓▓▓▓▓▓▓······] 72% used

Tokens consumed: 144,000
Tokens saved:    38,500
Context limit:   200,000

## Recent Tool Usage
Bash          in: 2,400  out: 8,200
Read          in: 1,800  out: 4,100
lctx_execute  in: 200    out: 6,300

⚠ WARNING: Context 72% full. Use lctx tools to save space.
```

You see exactly where your tokens go. You optimize before it's too late.
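The bar itself is easy to reproduce. A sketch, assuming a 20-cell bar filled proportionally to usage (the function name is hypothetical, not part of the tool's API):

```typescript
// Render a fixed-width usage bar: filled cells = used / limit, rounded.
function renderBudgetBar(used: number, limit: number, cells = 20): string {
  const ratio = Math.min(used / limit, 1); // clamp at 100%
  const filled = Math.round(ratio * cells);
  const pct = Math.round(ratio * 100);
  return "[" + "▓".repeat(filled) + "·".repeat(cells - filled) + `] ${pct}% used`;
}

renderBudgetBar(144_000, 200_000); // → "[▓▓▓▓▓▓▓▓▓▓▓▓▓▓······] 72% used"
```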
6. Git-Aware Indexing
Other tools: the AI doesn't know you just pushed code. It's referencing stale context from 2 hours ago.
Logica Context: lctx_git indexes your current branch, recent commits, staged files, and diff summary. The AI starts every session knowing what changed.
```
> lctx_git

Branch: feat/auth-refactor
Ahead: 3 commits

## Staged (2)
+ src/middleware/jwt.ts
+ tests/auth.test.ts

## Recent Commits
a3f8b2c refactor: extract JWT validation into middleware
9e1d445 fix: session expiry race condition
2b7a901 test: add auth integration tests
```

No more "can you check what I changed?" — the AI already knows.
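One way to recover staged files is to parse `git status --porcelain` output, where the first (index-side) status letter marks staged entries. A sketch; the parser name is illustrative and this is not lctx_git's real implementation:

```typescript
// Extract staged paths from `git status --porcelain` text.
// Porcelain v1 lines are "XY path": X is the index (staged) status,
// Y the worktree status; untracked files show as "??".
function stagedFiles(porcelain: string): string[] {
  return porcelain
    .split("\n")
    .filter((line) => /^[MADRC]/.test(line)) // staged add/modify/delete/rename/copy
    .map((line) => line.slice(3)); // path starts after "XY "
}

const sample = "A  src/middleware/jwt.ts\nA  tests/auth.test.ts\n?? scratch.txt";
stagedFiles(sample); // → ["src/middleware/jwt.ts", "tests/auth.test.ts"]
```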
7. MCP Aggregator
Other tools: you have 8 MCP servers loaded. Each one adds tool definitions to the context. You don't know which ones are costing you.
Logica Context: lctx_mcp reads your .mcp.json, lists every active server, estimates their context cost, and recommends which ones to disable.
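A cost estimate of this shape can be sketched as a walk over a `.mcp.json`-style object, using per-server token estimates with a fallback for unknown servers. All figures and names here are assumptions, not measured values:

```typescript
// Illustrative per-server context cost estimation. Real costs depend on
// each server's tool definitions; the numbers are placeholders.
interface McpConfig {
  mcpServers: Record<string, { command: string; args?: string[] }>;
}

function estimateContextCost(
  config: McpConfig,
  perServerEstimate: Record<string, number>,
  fallback = 800 // assumed default cost for servers we know nothing about
): { total: number; servers: [string, number][] } {
  const servers = Object.keys(config.mcpServers).map(
    (name): [string, number] => [name, perServerEstimate[name] ?? fallback]
  );
  const total = servers.reduce((sum, [, tokens]) => sum + tokens, 0);
  return { total, servers };
}
```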
```
> lctx_mcp

# MCP Servers
Active: 8
Estimated context cost per cycle: ~3,200 tokens

## Servers
supabase        npx @supabase/mcp   ~800 tokens
notion          npx @notion/mcp     ~800 tokens
logica-context  npx logica-context  ~150 tokens
...

## Recommendations
- 8 MCP servers active. Consider disabling unused ones.
- Heavy servers: supabase, notion. Each call uses ~800 tokens.
```

Quick Start
```json
{
  "logica-context": {
    "type": "stdio",
    "command": "npx",
    "args": ["-y", "logica-context"]
  }
}
```

Add to `.mcp.json`. Restart your AI. Done.
Enable Supabase (for features 2, 3, 4)
```
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_KEY=your-service-key
```

Enable Semantic Search (feature 4)

```
VOYAGE_API_KEY=your-key   # or OPENAI_API_KEY
```

All 17 Tools
Core (10 tools — what every context server should do)
| Tool | Description |
| --- | --- |
| | Run N commands + N search queries in one call |
| | Sandboxed execution (shell, node, python, ruby, go, swift, rust, deno) |
| | Process files in sandbox |
| | Index text into knowledge base |
| | BM25 full-text search with Porter stemming |
| | Fetch URL, convert HTML to markdown, auto-index |
| | Knowledge base + session statistics |
| | Diagnose installation |
| | Check for updates |
| | Clear knowledge base |
Exclusive (7 tools — what nobody else has)
| Tool | Description |
| --- | --- |
| `lctx_scan` | Project DNA — auto-detect stack, languages, frameworks, patterns |
| `lctx_semantic` | Semantic search via embeddings (Voyage AI / OpenAI + pgvector) |
| `lctx_budget` | Context budget — token tracking with visual bar + warnings |
| `lctx_git` | Git-aware indexing — branch, diff, commits, modified files |
| `lctx_mcp` | MCP aggregator — list servers, estimate context cost, optimize |
| `lctx_team_push` | Push to team knowledge base (shared via Supabase) |
| `lctx_team_search` | Search team knowledge base |
Benchmarks
| Scenario | Without | With | Savings |
| --- | --- | --- | --- |
| | ~8,000 tokens | ~200 tokens | 97.5% |
| | ~4,000 tokens | ~150 tokens | 96.3% |
| | ~12,000 tokens | ~300 tokens | 97.5% |
| Fetch 50KB HTML page | ~12,500 tokens | ~500 tokens | 96.0% |
vs. Everything Else
| | Logica Context | context-mode | Raw tools |
| --- | --- | --- | --- |
| Sandbox + indexing | Yes | Yes | No |
| Project DNA | Yes | No | No |
| Cross-session memory | Yes | No | No |
| Team knowledge base | Yes | No | No |
| Semantic search | Yes | No | No |
| Context budget | Yes | No | No |
| Git-aware indexing | Yes | No | No |
| MCP aggregator | Yes | No | No |
| Session continuity | Yes | Yes | No |
| Security layer | Yes | Partial | No |
| License | MIT | Elastic-2.0 | — |
Platforms
Works with Claude Code, Cursor, Gemini CLI, VS Code Copilot, Codex, Kiro, Zed, and OpenCode. See configs/ for per-platform setup.
Security
Command validation, path restrictions, output sanitization, sandboxed execution with timeouts. See SECURITY.md.
Development
```
git clone https://github.com/Rovemark/logica-context.git
cd logica-context
npm install && npm test && npm run build
```

License
MIT — use it however you want.