Context Engine MCP Server
The Context Engine MCP Server is a local-first, agent-agnostic platform for workspace indexing, semantic search, AI-assisted planning, code review, memory management, and security validation.
Workspace Indexing & Retrieval
- Index 50+ file types for semantic search (`index_workspace`, `semantic_search`, `codebase_retrieval`)
- Retrieve full or partial file contents (`get_file`) and token-aware context for AI prompts (`get_context_for_prompt`)
- Enhance simple prompts into detailed, codebase-aware prompts (`enhance_prompt`)
- Manage index health: check status, reindex, or clear (`index_status`, `reindex_workspace`, `clear_index`)
Memory Management
- Persist preferences, architecture decisions, and project facts across sessions (`add_memory`, `list_memories`)
Planning & Execution
- Generate AI-powered implementation plans with dependency graphs, risk assessments, and Mermaid diagrams (`create_plan`, `refine_plan`, `visualize_plan`)
- Execute plan steps with AI-generated code changes, in preview or apply mode (`execute_plan`)
- Save, load, list, and delete plans (`save_plan`, `load_plan`, `list_plans`, `delete_plan`)
- Track step-by-step progress and version history; roll back plan versions (`start_step`, `complete_step`, `fail_step`, `view_progress`, `view_history`, `compare_plan_versions`, `rollback_plan`)
- Gate execution with human approval workflows (`request_approval`, `respond_approval`)
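For illustration, a visualized plan is a Mermaid dependency graph along these lines (step names and layout here are hypothetical, not actual tool output):

```mermaid
graph TD
  s1[Step 1: Add auth middleware] --> s2[Step 2: Wire protected routes]
  s1 --> s3[Step 3: Add unit tests]
  s2 --> s4[Step 4: Update docs]
  s3 --> s4
```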
Code Review
- AI-powered review of diffs or git changes with structured findings (P0–P3 priority, confidence scores) (`review_changes`, `review_diff`, `review_git_diff`, `review_auto`)
- Reactive PR review with parallel execution, session management, commit-aware caching, and telemetry (`reactive_review_pr`, `get_review_status`, `pause_review`, `resume_review`, `get_review_telemetry`)
- Deterministic invariant checks against YAML-defined rules (`check_invariants`) and static analysis via TypeScript/Semgrep (`run_static_analysis`)
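As a sketch only, a YAML rule file for invariant checks might look like the following; every key name here is hypothetical, since the actual schema is defined by `check_invariants`:

```yaml
# Hypothetical invariant rules file; key names are illustrative only.
invariants:
  - id: no-console-log
    description: Production code must not call console.log
    paths: ["src/**/*.ts"]
    forbidden_pattern: "console\\.log\\("
  - id: no-hardcoded-urls
    description: Service endpoints must come from configuration
    paths: ["src/**/*.ts"]
    forbidden_pattern: "https?://[a-z0-9.-]+"
```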
Security & Validation
- Detect and mask 15+ secret types before sending content to an LLM (`scrub_secrets`)
- Multi-tier content validation: bracket balancing, JSON structure, TODO detection, hardcoded URLs, and automatic secret scrubbing (`validate_content`)
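As a sketch of what these checks do, here are two toy passes in the same spirit: a bracket balancer (one tier of `validate_content`) and a single-pattern masker (one of the 15+ secret types `scrub_secrets` covers). The function names and the regex are illustrative, not the server's implementation.

```typescript
// Illustrative bracket-balancing tier: push openers, pop on closers,
// and require the stack to be empty at the end.
function bracketsBalanced(text: string): boolean {
  const pairs: Record<string, string> = { ")": "(", "]": "[", "}": "{" };
  const stack: string[] = [];
  for (const ch of text) {
    if (ch === "(" || ch === "[" || ch === "{") stack.push(ch);
    else if (ch in pairs) {
      if (stack.pop() !== pairs[ch]) return false;
    }
  }
  return stack.length === 0;
}

// Illustrative masking pass: redact values shaped like AWS access key IDs
// before content leaves the machine. One example pattern of many.
function maskAwsKeyIds(text: string): string {
  return text.replace(/\bAKIA[0-9A-Z]{16}\b/g, "[REDACTED_AWS_KEY]");
}
```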
Tool Discovery
- Inspect all available server tools and capabilities (`tool_manifest`)
- Provides comprehensive tools for automatic retrieval and review of code changes, including staged, unstaged, branch, and commit-level diffs.
- Enables visualization of structured execution plans by generating and displaying Mermaid-formatted diagrams.
- Integrates with local static analysis tools like tsc to provide deterministic feedback during the code review process.
- Supports deterministic invariant checking using YAML-based rule files to enforce project-specific constraints during reviews.
A local-first, agent-agnostic Model Context Protocol (MCP) server for workspace indexing, retrieval, planning, and review workflows, with a setup path that works well for Codex and other OpenAI-powered agents.
New here? Start with the beginner quick start below.
If you want client-specific setup help, see docs/MCP_CLIENT_SETUP.md.
If you are on Windows, see docs/WINDOWS_DEPLOYMENT_GUIDE.md.
Historical docs live in docs/archive/INDEX.md if you need the old planning and migration notes.
OpenAI / Codex Showcase
If you want to see what this project demonstrates for OpenAI-style agent workflows, start here:
- Local workspace indexing and retrieval
- Review and planning workflows layered on top of the same MCP server
- Beginner-friendly install and client setup
- Windows support and copy-paste setup examples
- AI-agent-friendly instructions for self-setup
Why This Matters
- It shows how an OpenAI-powered agent can connect to a real workspace and start using tools right away.
- It combines retrieval, review, and planning in one MCP server instead of relying on one-off scripts.
- It gives both humans and AI agents a simple, repeatable setup path, which makes demos and onboarding easier.
Fastest demo path:

```shell
npm install
npm run build
codex mcp add context-engine -- node dist/index.js
```

Then in Codex, confirm the tools are visible and try:

```
use semantic_search to find authentication logic
```

Beginner Quick Start
If you just want to get Context Engine running locally, follow these steps:
1. Install Node.js 18+.
2. Clone this repository and open it in a terminal at the repo root.
3. Install dependencies: `npm install`
4. Build the server: `npm run build`
5. Run the verification checks: `npm run verify`
6. Start the MCP server: `node dist/index.js`

By default, Context Engine now resolves the workspace like this:
- an explicit `--workspace` flag wins
- otherwise it uses the current folder
- if you launched from a nested folder inside a git repo, it falls back to the nearest git root
- if no git root exists, it stays on the current folder and logs a warning
On first run, if the index is missing or stale, startup can kick off background indexing automatically. The server still starts first, but the first query may be slower until indexing finishes.
Connect It To Your MCP Client
The server speaks MCP over stdio, so most clients can launch it with the same command.
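"Speaks MCP over stdio" means a client writes JSON-RPC 2.0 messages to the server's stdin and reads responses from stdout. A minimal sketch of the messages a client sends (method names follow the MCP specification; the version string and client name are example values):

```typescript
// Sketch of the JSON-RPC messages an MCP client sends over stdio.
// Shapes follow the MCP spec; the field values here are illustrative.
function initializeRequest(id: number) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "initialize",
    params: {
      protocolVersion: "2024-11-05", // example; clients negotiate this
      capabilities: {},
      clientInfo: { name: "example-client", version: "0.1.0" },
    },
  };
}

function listToolsRequest(id: number) {
  return { jsonrpc: "2.0" as const, id, method: "tools/list", params: {} };
}

function callToolRequest(id: number, name: string, args: Record<string, unknown>) {
  return { jsonrpc: "2.0" as const, id, method: "tools/call", params: { name, arguments: args } };
}
```

For example, `callToolRequest(3, "semantic_search", { query: "authentication logic" })` mirrors the demo query above; each message goes out as one line of JSON.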
First-Time Setup vs Daily Use
Use this mental model:
- First-time setup: register the MCP server once in your client.
- Daily use: open any repo and let the server resolve the workspace automatically.
- Override only when needed: pass `--workspace <absolute-path>` if the client launches from the wrong folder or you want a different repo on purpose.
Codex CLI
codex mcp add context-engine -- node dist/index.jsWindows example
codex mcp add context-engine -- node "D:\GitProjects\context-engine\dist\index.js"Claude Code, Claude Desktop, Cursor, Antigravity
See docs/MCP_CLIENT_SETUP.md for copy-paste config examples for each client.
Ready-to-use sample config files live in examples/mcp-clients/. Optional skill packages for AI workflows live in examples/skills/.
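For orientation, many MCP clients (Claude Desktop and Cursor among them) register stdio servers through an `mcpServers` map like the sketch below; the path is a placeholder and exact field names vary by client, so prefer the checked-in samples in examples/mcp-clients/:

```json
{
  "mcpServers": {
    "context-engine": {
      "command": "node",
      "args": ["/absolute/path/to/context-engine/dist/index.js"]
    }
  }
}
```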
If an AI agent is setting this up
Paste this into the agent if you want it to do the setup for you:
```
Set up Context Engine MCP for this workspace.

1. Run npm install and npm run build.
2. Register the MCP server once with node dist/index.js.
3. Confirm the client launches the MCP server from the repo I am working in.
4. If the client launches from the wrong folder, add --workspace <absolute-path-to-workspace> as an override.
5. Confirm the server appears in the client and that tool_manifest() or an equivalent tool list works.
6. Run one quick retrieval test, for example semantic_search, to confirm the connection is working.
7. If startup says the workspace is unindexed or stale, let the background indexing finish or run index_workspace manually.
8. If the client is Codex CLI, use: codex mcp add context-engine -- node dist/index.js
```
Startup Behavior
When the server starts without `--workspace`, it tries to be repo-aware:

- repo root launch: uses that repo
- nested repo folder launch: upgrades to the nearest git root
- non-git folder launch: stays on the current folder and warns clearly

If startup auto-index is enabled, missing or stale workspaces start background indexing automatically.

Operator overrides:

- disable startup auto-index with `CE_AUTO_INDEX_ON_STARTUP=false`
- force a specific workspace with `--workspace "D:\path\to\repo"`
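The resolution order can be sketched as a small pure function; the type and function names here are illustrative, not the server's actual code:

```typescript
// Hypothetical sketch of the workspace resolution order described above.
interface ResolveInput {
  cliWorkspace?: string; // value of an explicit --workspace flag, if any
  cwd: string;           // folder the client launched the server from
  gitRoot?: string;      // nearest enclosing git root, if one exists
}

function resolveWorkspace(input: ResolveInput): { path: string; warning?: string } {
  if (input.cliWorkspace) return { path: input.cliWorkspace }; // explicit flag wins
  if (input.gitRoot && input.gitRoot !== input.cwd) {
    return { path: input.gitRoot };                            // upgrade to the git root
  }
  if (!input.gitRoot) {
    return { path: input.cwd, warning: "no git root found; using current folder" };
  }
  return { path: input.cwd };                                  // cwd is already the repo root
}
```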
Architecture
This implementation follows a clean 5-layer architecture:
```
┌────────────────────────────┐
│ Coding Agents (Clients)    │  Layer 4: Codex, Claude, Cursor, etc.
│ Codex | Claude | Cursor    │
└────────────▲───────────────┘
             │ MCP (tools)
┌────────────┴───────────────┐
│ MCP Interface Layer        │  Layer 3: server.ts, tools/
│ (standardized tool API)    │
└────────────▲───────────────┘
             │ internal API
┌────────────┴───────────────┐
│ Context Service Layer      │  Layer 2: serviceClient.ts
│ (query orchestration)      │
└────────────▲───────────────┘
             │ domain calls
┌────────────┴───────────────┐
│ Retrieval + Review Engine  │  Layer 1: local-native runtime
│ (indexing, retrieval)      │
└────────────▲───────────────┘
             │ storage/state
┌────────────┴───────────────┐
│ Local State / Artifacts    │  Layer 5: workspace state + evidence
│ (index, cache, receipts)   │
└────────────────────────────┘
```

Layer Responsibilities
- Layer 1: local-native indexing, retrieval, review support, and provider orchestration
- Layer 2: context assembly, snippet formatting, deduplication, limits, and caching
- Layer 3: MCP tools, validation, and request/response contracts
- Layer 4: coding agents and MCP clients that consume the tools
- Layer 5: persisted index state, caches, rollout receipts, and generated artifacts
Features
MCP Tools
The server exposes tools across these areas:
- Core context and retrieval
- Memory
- Planning and execution
- Plan management
- Code review
- Reactive review
Call `tool_manifest()` on the running server to inspect the current tool inventory directly.
Key Characteristics
- Local-first runtime for indexing and retrieval, with OpenAI-backed planning/review workflows layered on top
- Agent-agnostic MCP interface
- Local-native retrieval provider as the active runtime
- Thin `context-engine-mcp` launcher for convenience; it starts the same server and does not add features
- Persistent state and evidence artifacts for rollout-proof workflows
- Planning, review, and validation workflows built into the server
- Optional benchmarking, parity, and governance gates for safer changes
Quick Start
```shell
npm install
npm run build
npm run verify
node dist/index.js
```

Optional validation commands:

```shell
npm run ci:check:no-legacy-provider
npm run ci:check:legacy-capability-parity
npm run ci:check:legacy-capability-parity:strict
```

Documentation Quick Links
- Setup: docs/MCP_CLIENT_SETUP.md
- Windows Deployment: docs/WINDOWS_DEPLOYMENT_GUIDE.md
- Troubleshooting: docs/archive/TROUBLESHOOTING.md
- Testing: docs/archive/TESTING.md
- Architecture: ARCHITECTURE.md
- Memory Operations: docs/MEMORY_OPERATIONS_RUNBOOK.md
- All Docs: docs/archive/INDEX.md
Current Status
- Retrieval is local-native and index-backed
- Planning and review use the OpenAI session path
- Remaining legacy-provider references are historical docs, tests, or migration guardrails
- Current hardening focuses on fast paths, cancellation, and prompt efficiency rather than provider replacement