CodeCortex

Persistent codebase knowledge layer for AI agents. Your AI shouldn't re-learn your codebase every session.

Website · npm · GitHub

The Problem

Every AI coding session starts from scratch. When context compacts or a new session begins, the AI re-scans the entire codebase. Same files, same tokens, same wasted time. It's like hiring a new developer every session who has to re-learn everything before writing a single line.


The Solution

CodeCortex pre-digests codebases into layered knowledge files and serves them to any AI agent via MCP. Instead of re-understanding your codebase every session, the AI starts with knowledge.

Hybrid extraction: tree-sitter (native N-API bindings) handles structure (symbols, imports, and calls across 28 languages), while the host LLM handles semantics (what modules do and why they're built that way). No extra API keys required.
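
To illustrate the structural half of that split, here is a toy sketch of symbol extraction. The real tool parses with tree-sitter grammars; the regex stand-in below is purely illustrative, and the `SymbolEntry` shape is a hypothetical simplification of what lands in `symbols.json`.

```typescript
// Hypothetical, simplified symbol entry; the real symbols.json schema may differ.
interface SymbolEntry {
  name: string;
  kind: "function" | "class";
  file: string;
}

// Toy extractor: regexes stand in for tree-sitter parsing, for illustration only.
function extractSymbols(file: string, source: string): SymbolEntry[] {
  const symbols: SymbolEntry[] = [];
  for (const m of source.matchAll(/\bfunction\s+([A-Za-z_$][\w$]*)/g)) {
    symbols.push({ name: m[1], kind: "function", file });
  }
  for (const m of source.matchAll(/\bclass\s+([A-Za-z_$][\w$]*)/g)) {
    symbols.push({ name: m[1], kind: "class", file });
  }
  return symbols;
}

const src = "class Cache {}\nfunction warmCache() {}";
console.log(extractSymbols("cache.ts", src));
```

The LLM layer then annotates what these symbols mean; this structural pass alone needs no model calls at all.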

Quick Start

```shell
# Install
npm install -g codecortex-ai

# Initialize knowledge for your project
cd /path/to/your-project
codecortex init

# Start MCP server (for AI agent access)
codecortex serve

# Check knowledge freshness
codecortex status
```

Connect to Claude Code

Add to your MCP config:

```json
{
  "mcpServers": {
    "codecortex": {
      "command": "codecortex",
      "args": ["serve"],
      "cwd": "/path/to/your-project"
    }
  }
}
```

What Gets Generated

All knowledge lives in .codecortex/ as flat files in your repo:

```
.codecortex/
  cortex.yaml          # project manifest
  constitution.md      # project overview for agents
  overview.md          # module map + entry points
  graph.json           # dependency graph (imports, calls, modules)
  symbols.json         # full symbol index (functions, classes, types...)
  temporal.json        # git coupling, hotspots, bug history
  modules/*.md         # per-module deep analysis
  decisions/*.md       # architectural decision records
  sessions/*.md        # session change logs
  patterns.md          # coding patterns and conventions
```
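
Because these are flat files, agents and scripts can consume them directly. The sketch below assumes a simplified shape for `graph.json` (the actual schema isn't specified here) and answers a common impact-analysis question: which modules import a given target?

```typescript
// Hypothetical, simplified graph.json shape; the real schema may differ.
interface DepGraph {
  modules: string[];
  imports: { from: string; to: string }[];
}

// Which modules import `target` directly? Useful before editing `target`.
function directDependents(graph: DepGraph, target: string): string[] {
  return graph.imports.filter(e => e.to === target).map(e => e.from);
}

const graph: DepGraph = {
  modules: ["api", "db", "worker"],
  imports: [
    { from: "api", to: "db" },
    { from: "worker", to: "db" },
  ],
};
console.log(directDependents(graph, "db")); // api and worker both import db
```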

Six Knowledge Layers

| Layer | What | File |
| --- | --- | --- |
| 1. Structural | Modules, deps, symbols, entry points | `graph.json` + `symbols.json` |
| 2. Semantic | What each module does, data flow, gotchas | `modules/*.md` |
| 3. Temporal | Git behavioral fingerprint: coupling, hotspots, bug history | `temporal.json` |
| 4. Decisions | Why things are built this way | `decisions/*.md` |
| 5. Patterns | How code is written here | `patterns.md` |
| 6. Sessions | What changed between sessions | `sessions/*.md` |

The Temporal Layer

This is the killer differentiator. The temporal layer tells agents "if you touch file X, you MUST also touch file Y" even when there's no import between them. This comes from git co-change analysis, not static code analysis.

Example from a real codebase:

  • routes.ts and worker.ts co-changed in 9/12 commits (75%) with zero imports between them

  • Without this knowledge, an AI editing one file would miss the paired change roughly 75% of the time
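
The core of that analysis can be sketched in a few lines. Assuming the per-commit file lists have already been mined from git history (the real tool uses simple-git for this), coupling is just a conditional frequency:

```typescript
// Co-change coupling from git history: P(b changed | a changed).
// Input is a list of commits, each represented by its changed files.
type Commit = string[];

function coupling(commits: Commit[], a: string, b: string): number {
  const withA = commits.filter(c => c.includes(a));
  if (withA.length === 0) return 0;
  const withBoth = withA.filter(c => c.includes(b));
  return withBoth.length / withA.length;
}

const history: Commit[] = [
  ["routes.ts", "worker.ts"],
  ["routes.ts", "worker.ts"],
  ["routes.ts"],
  ["worker.ts", "queue.ts"],
];
console.log(coupling(history, "routes.ts", "worker.ts")); // 2 of 3 → ~0.67
```

Note this catches pairs that static analysis cannot: `routes.ts` and `worker.ts` above share no imports, yet the history says they usually move together.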

MCP Tools (14)

Read Tools (9)

| Tool | Description |
| --- | --- |
| `get_project_overview` | Constitution + overview + graph summary |
| `get_module_context` | Module doc by name, including temporal signals |
| `get_session_briefing` | Changes since the last session |
| `search_knowledge` | Keyword search across all knowledge |
| `get_decision_history` | Decision records filtered by topic |
| `get_dependency_graph` | Import/export graph, filterable |
| `lookup_symbol` | Symbol lookup by name, file, or kind |
| `get_change_coupling` | Which files must also be edited when touching X |
| `get_hotspots` | Files ranked by risk (churn × coupling) |
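
The `get_hotspots` ranking can be sketched from the "churn × coupling" description above. The exact formula CodeCortex uses isn't documented here, so the `FileStats` shape and scoring below are illustrative assumptions:

```typescript
// Hypothetical per-file stats; field names are assumptions for illustration.
interface FileStats {
  file: string;
  churn: number;       // number of commits touching the file
  maxCoupling: number; // strongest co-change ratio with any other file (0..1)
}

// Rank files by churn × coupling, highest risk first.
function rankHotspots(stats: FileStats[]): FileStats[] {
  return [...stats].sort(
    (x, y) => y.churn * y.maxCoupling - x.churn * x.maxCoupling
  );
}

const ranked = rankHotspots([
  { file: "utils.ts", churn: 40, maxCoupling: 0.1 },
  { file: "routes.ts", churn: 12, maxCoupling: 0.75 },
]);
console.log(ranked[0].file); // routes.ts: score 9 beats utils.ts at 4
```

Multiplying the two signals means a file is only a hotspot if it is both frequently edited and entangled with other files; high churn alone isn't enough.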

Write Tools (5)

| Tool | Description |
| --- | --- |
| `analyze_module` | Returns source files plus a structured prompt for LLM analysis |
| `save_module_analysis` | Persists LLM analysis to `modules/*.md` |
| `record_decision` | Saves an architectural decision to `decisions/*.md` |
| `update_patterns` | Merges a coding pattern into `patterns.md` |
| `report_feedback` | Lets the agent report incorrect knowledge for the next analysis |

CLI Commands

| Command | Description |
| --- | --- |
| `codecortex init` | Discover the project, extract symbols, analyze git history |
| `codecortex serve` | Start the MCP server (stdio transport) |
| `codecortex update` | Re-extract changed files and update affected modules |
| `codecortex status` | Show knowledge freshness, stale modules, and symbol counts |

Token Efficiency

CodeCortex uses a three-tier memory model to minimize token usage:

```
Session start (HOT only):            ~4,300 tokens
Working on a module (+WARM):         ~5,000 tokens
Need coding patterns (+COLD):        ~5,900 tokens

vs. raw scan of entire codebase:    ~37,800 tokens
```

85-90% token reduction. 7-10x efficiency gain.
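
The tier model loads knowledge progressively: HOT always, WARM and COLD only on demand. The sketch below is illustrative only; the file-to-tier assignments and per-file token counts are assumptions chosen to reproduce the figures quoted above.

```typescript
// Three-tier progressive loading sketch. Tier contents and token counts
// are illustrative assumptions, not CodeCortex's actual assignments.
type Tier = "HOT" | "WARM" | "COLD";

const tierFiles: Record<Tier, { file: string; tokens: number }[]> = {
  HOT: [
    { file: "constitution.md", tokens: 2300 },
    { file: "overview.md", tokens: 2000 },
  ],
  WARM: [{ file: "modules/api.md", tokens: 700 }],
  COLD: [{ file: "patterns.md", tokens: 900 }],
};

// Load tiers in order, stopping once the requested tier is included.
function tokensLoaded(upTo: Tier): number {
  const order: Tier[] = ["HOT", "WARM", "COLD"];
  let total = 0;
  for (const tier of order) {
    total += tierFiles[tier].reduce((sum, f) => sum + f.tokens, 0);
    if (tier === upTo) break;
  }
  return total;
}

console.log(tokensLoaded("HOT"));  // 4300
console.log(tokensLoaded("COLD")); // 5900
```

The point of the design: most sessions never leave HOT, so the common case pays ~4,300 tokens instead of a full ~37,800-token rescan.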

Supported Languages (28)

| Category | Languages |
| --- | --- |
| Web | TypeScript, TSX, JavaScript, Liquid |
| Systems | C, C++, Objective-C, Rust, Zig, Go |
| JVM | Java, Kotlin, Scala |
| .NET | C# |
| Mobile | Swift, Dart |
| Scripting | Python, Ruby, PHP, Lua, Bash, Elixir |
| Functional | OCaml, Elm, Emacs Lisp |
| Other | Solidity, Vue, CodeQL |

Tech Stack

  • TypeScript ESM, Node.js 20+

  • tree-sitter (native N-API) + 28 language grammar packages

  • @modelcontextprotocol/sdk - MCP server

  • commander - CLI

  • simple-git - git integration

  • yaml, zod, glob

License

MIT
