Code Graph Context MCP Server
A Model Context Protocol (MCP) server that builds rich code graphs to provide deep contextual understanding of TypeScript codebases to Large Language Models. This server parses your codebase using AST analysis, constructs a comprehensive graph representation in Neo4j, and provides intelligent querying capabilities through semantic search and graph traversal.
Config-Driven & Extensible: Define custom framework schemas to capture domain-specific patterns beyond the included NestJS support. The parser is fully configurable to recognize your architectural patterns, decorators, and relationships.
Features
Multi-Project Support: Parse and query multiple projects in a single database with complete isolation via `projectId`
Rich Code Graph Generation: Parses TypeScript projects and creates detailed graph representations with AST-level precision
Semantic Search: Vector-based semantic search using OpenAI embeddings to find relevant code patterns and implementations
Natural Language Querying: Convert natural language questions into Cypher queries using OpenAI assistants API
Framework-Aware & Customizable: Built-in NestJS schema with ability to define custom framework patterns via configuration
Weighted Graph Traversal: Intelligent traversal that scores paths based on relationship importance, query relevance, and depth
Workspace & Monorepo Support: Auto-detects Nx, Turborepo, pnpm, Yarn, and npm workspaces
Parallel Parsing: Multi-threaded parsing with configurable worker pool for maximum CPU utilization
Async Parsing: Background parsing with Worker threads for large codebases without blocking the MCP server
Streaming Import: Chunked processing for projects with 100+ files to prevent memory issues
TypeAlias Support: Full parsing of TypeScript type aliases into graph nodes
Incremental Parsing: Only reparse changed files for faster updates
File Watching: Real-time monitoring with automatic incremental graph updates on file changes
Impact Analysis: Assess refactoring risk with dependency analysis (LOW/MEDIUM/HIGH/CRITICAL scoring)
Dead Code Detection: Find unreferenced exports, uncalled private methods, unused interfaces with confidence scoring
Duplicate Code Detection: Identify structural duplicates (identical AST) and semantic duplicates (similar logic via embeddings)
Swarm Coordination: Multi-agent stigmergic coordination through pheromone markers with exponential decay
High Performance: Optimized Neo4j storage with vector indexing for fast retrieval
MCP Integration: Seamless integration with Claude Code and other MCP-compatible tools
Architecture
The MCP server consists of several key components:
Core Components
TypeScript Parser (`src/core/parsers/typescript-parser.ts`): Uses `ts-morph` to parse the TypeScript AST and extract code entities
Graph Storage (`src/storage/neo4j/neo4j.service.ts`): Neo4j integration for storing and querying the code graph
Embeddings Service (`src/core/embeddings/embeddings.service.ts`): OpenAI integration for semantic search capabilities
MCP Server (`src/mcp/mcp.server.ts`): Main MCP server providing tools for code analysis
Graph Schema
The system uses a dual-schema approach:
Core Schema: AST-level nodes (Classes, Methods, Properties, Imports, etc.)
Framework Schema: Semantic interpretations (NestJS Controllers, Services, HTTP Endpoints, etc.)
Getting Started
Prerequisites
Node.js >= 18
Neo4j >= 5.23 with APOC plugin
OpenAI API Key (for embeddings and natural language processing)
Docker (recommended for Neo4j setup)
Installation
Choose the installation method that works best for you:
Option 1: NPM Install (Recommended)
Then configure your OpenAI API key in ~/.claude.json:
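The exact entry depends on how the package is published and how Claude Code registers MCP servers on your machine; a typical stdio configuration (package name, command, and args are assumptions, not confirmed by this README) looks like:

```json
{
  "mcpServers": {
    "code-graph-context": {
      "command": "npx",
      "args": ["-y", "code-graph-context"],
      "env": {
        "OPENAI_API_KEY": "sk-your-key-here"
      }
    }
  }
}
```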
Option 2: From Source
CLI Commands
The package includes a CLI for managing Neo4j:
Init options:
Alternative Neo4j Setup
If you prefer not to use the CLI, you can set up Neo4j manually:
Docker Compose:
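A minimal `docker-compose.yml` sketch, assuming the official `neo4j` image and its `NEO4J_PLUGINS` mechanism for installing APOC (adjust the version and password to your setup):

```yaml
services:
  neo4j:
    image: neo4j:5.23
    ports:
      - "7474:7474"   # HTTP / Browser
      - "7687:7687"   # Bolt
    environment:
      NEO4J_AUTH: neo4j/your-password
      NEO4J_PLUGINS: '["apoc"]'
    volumes:
      - neo4j_data:/data
volumes:
  neo4j_data:
```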
Docker Run:
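The equivalent single command, under the same assumptions:

```bash
docker run -d --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/your-password \
  -e NEO4J_PLUGINS='["apoc"]' \
  neo4j:5.23
```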
Neo4j Desktop: Download from neo4j.com/download and install APOC plugin.
Neo4j Aura (Cloud): Create account at neo4j.com/cloud/aura and configure connection URI in env vars.
Verify Installation
After installation, verify everything is working:
Check Neo4j is running:
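For example, assuming Neo4j's HTTP interface is on the default port 7474 (a running server responds with its discovery JSON):

```bash
curl -s http://localhost:7474
```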
Test APOC plugin:
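One way to check, from Neo4j Browser or `cypher-shell`:

```cypher
CALL apoc.help('apoc')
```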
Should return a list of APOC functions.
Test MCP server connection:
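If you registered the server with Claude Code, you can check its status with:

```bash
claude mcp list
```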
Should show: code-graph-context: ✓ Connected
Tool Usage Guide
Available Tools
| Tool | Description | Best For |
| --- | --- | --- |
| `list_projects` | List all parsed projects in database | Discovery - see available projects and their status |
| `search_codebase` | Semantic search using vector embeddings | Starting point - find code by describing what you need |
| `traverse_from_node` | Explore relationships from a specific node | Deep dive - understand dependencies and connections |
| `impact_analysis` | Analyze what depends on a node | Pre-refactoring - assess blast radius (LOW/MEDIUM/HIGH/CRITICAL) |
| `parse_typescript_project` | Parse project and build the graph | Initial setup - supports async mode for large projects |
| `check_parse_status` | Monitor async parsing job progress | Monitoring - track background parsing jobs |
| | Start file watching for a project | Live updates - auto-update graph on file changes |
| | Stop file watching for a project | Resource management - stop monitoring |
| | List all active file watchers | Monitoring - see what's being watched |
| `natural_language_to_cypher` | Convert natural language to Cypher | Advanced queries - complex graph queries |
| `detect_dead_code` | Find unreferenced exports, uncalled methods, unused interfaces | Code cleanup - identify potentially removable code |
| `detect_duplicate_code` | Find structural and semantic code duplicates | Refactoring - identify DRY violations |
| | Leave pheromone markers on code nodes | Multi-agent - stigmergic coordination |
| | Query pheromones in the code graph | Multi-agent - sense what other agents are doing |
| | Bulk delete pheromones | Multi-agent - cleanup after swarm completion |
| `test_neo4j_connection` | Verify database connectivity | Health check - troubleshooting |
Note: All query tools (`search_codebase`, `traverse_from_node`, `impact_analysis`, `natural_language_to_cypher`) require a `projectId` parameter. Use `list_projects` to discover available projects.
Tool Selection Guide
`list_projects`: First step - discover what projects are available
`search_codebase`: Find code by describing what you're looking for
`traverse_from_node`: Use node IDs from search results to explore relationships
`impact_analysis`: Before refactoring - understand what depends on the code you're changing
Multi-Project Workflow
All query tools require a `projectId` parameter to ensure project isolation. You can provide:
Project ID: `proj_a1b2c3d4e5f6` (auto-generated from path)
Project Name: `my-backend` (extracted from package.json or directory name)
Project Path: `/path/to/my-backend` (resolved to project ID)
Typical Workflow:
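As a sketch (parameter names other than `projectId` are illustrative, not the tools' exact schemas):

```
1. list_projects
2. search_codebase     { "projectId": "my-backend", "query": "user authentication" }
3. traverse_from_node  { "projectId": "my-backend", "nodeId": "<id from step 2>" }
4. impact_analysis     { "projectId": "my-backend", "nodeId": "<id from step 2>" }
```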
Pro Tips:
Use project names instead of full IDs for convenience
Run `list_projects` first to see what's available
Each project is completely isolated - queries never cross project boundaries
Sequential Workflow Patterns
The MCP tools are designed to work together in powerful workflows. Here are the most effective patterns:
Pattern 1: Discovery → Focus → Deep Dive
Pattern 2: Broad Search → Targeted Analysis
Start Broad: Use `search_codebase` to find relevant starting points
Focus: Use `traverse_from_node` to explore specific relationships
Paginate: Use the `skip` parameter to explore different sections of the graph
Tool Deep Dive
1. search_codebase - Your Starting Point
Semantic search using vector embeddings. Returns JSON:API normalized response.
Response Structure:
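The exact keys vary by version; a sketch following JSON:API conventions (attribute names are assumptions):

```json
{
  "data": [
    {
      "type": "ClassDeclaration",
      "id": "proj_xxx:ClassDeclaration:abc123",
      "attributes": {
        "name": "UserService",
        "filePath": "src/users/user.service.ts"
      }
    }
  ],
  "included": [],
  "meta": { "total": 1 }
}
```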
Tips: Use specific domain terms. Node IDs from the `nodes` map can be used with `traverse_from_node`.
2. traverse_from_node - Deep Relationship Exploration
Explore connections from a specific node with depth, direction, and relationship filtering.
Returns the same JSON:API format as search_codebase.
3. parse_typescript_project - Graph Generation
Purpose: Parse a TypeScript/NestJS project and build the graph database.
Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| | string | required | Path to project root directory |
| | string | required | Path to tsconfig.json |
| | string | auto | Override auto-generated project ID |
| `clearExisting` | boolean | true | Clear existing data (false = incremental) |
| `async` | boolean | false | Run in background Worker thread |
| | enum | "auto" | "auto", "always", or "never" |
| | number | 50 | Files per chunk for streaming |
| | enum | "auto" | "auto", "nestjs", "vanilla" |
| `watch` | boolean | false | Start file watching after parse (requires sync mode) |
| | number | 1000 | Debounce delay for watch mode in ms |
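An example call body (the path parameter names are illustrative; `clearExisting` and `async` are documented above):

```json
{
  "projectPath": "/path/to/my-backend",
  "tsconfigPath": "/path/to/my-backend/tsconfig.json",
  "clearExisting": true,
  "async": true
}
```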
Modes:
Standard: Blocks until complete, best for small-medium projects
Async: Returns immediately, use `check_parse_status` to monitor
Streaming: Auto-enabled for projects >100 files, prevents OOM
Incremental: Set `clearExisting: false` to only reparse changed files
Watch: Set `watch: true` to automatically update graph on file changes (requires sync mode)
Performance Notes:
Large projects (>1000 files) should use `async: true`
Streaming is auto-enabled for projects >100 files
Incremental mode detects changes via mtime, size, and content hash
Worker threads have 30-minute timeout and 8GB heap limit
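The mtime/size/hash check can be sketched as follows. This is an illustration of the idea, not the server's actual implementation: a file is treated as unchanged only if all three fingerprint fields match the stored values.

```typescript
import { createHash } from "node:crypto";
import { readFileSync, statSync } from "node:fs";

// Fingerprint used to decide whether a file needs re-parsing.
interface FileFingerprint {
  mtimeMs: number;
  size: number;
  contentHash: string;
}

function fingerprint(path: string): FileFingerprint {
  const stats = statSync(path);
  const contentHash = createHash("sha256")
    .update(readFileSync(path))
    .digest("hex");
  return { mtimeMs: stats.mtimeMs, size: stats.size, contentHash };
}

// A file has changed if any of mtime, size, or content hash differs.
function hasChanged(path: string, previous: FileFingerprint): boolean {
  const current = fingerprint(path);
  return (
    current.mtimeMs !== previous.mtimeMs ||
    current.size !== previous.size ||
    current.contentHash !== previous.contentHash
  );
}
```

The mtime and size checks are cheap early-outs; the content hash catches edits that preserve both (e.g. a same-length in-place change with a restored timestamp).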
4. test_neo4j_connection - Health Check
Purpose: Verify database connectivity and APOC plugin availability.
5. detect_dead_code - Code Cleanup Analysis
Find unreferenced exports, uncalled private methods, and unused interfaces.
Returns items with confidence (HIGH/MEDIUM/LOW), category (`internal-unused`, `library-export`, `ui-component`), and reason. Automatically excludes NestJS entry points and common patterns.
6. detect_duplicate_code - DRY Violation Detection
Find structural (identical AST) and semantic (similar embeddings) duplicates.
Returns duplicate groups with similarity score, confidence, category (`cross-file`, `same-file`, `cross-app`), and recommendation.
7. File Watching Tools
Purpose: Monitor file changes and automatically update the graph.
How It Works:
File watcher monitors `.ts` and `.tsx` files using native OS events
Changes are debounced to batch rapid edits
Only modified files are re-parsed (incremental)
Cross-file edges are preserved during updates
Graph updates happen automatically in the background
Resource Limits:
Maximum 10 concurrent watchers
1000 pending events per watcher
Graceful cleanup on server shutdown
8. Swarm Coordination Tools
Purpose: Enable multiple parallel agents to coordinate work through stigmergic pheromone markers in the code graph—no direct messaging needed.
Core Concepts:
Pheromones: Markers attached to graph nodes that decay over time
swarmId: Groups related agents for bulk cleanup when done
Workflow States: `exploring`, `claiming`, `modifying`, `completed`, `blocked` (mutually exclusive per agent+node)
Flags: `warning`, `proposal`, `needs_review` (can coexist with workflow states)
Pheromone Types & Decay:
| Type | Half-Life | Use |
| --- | --- | --- |
| `exploring` | 2 min | Browsing/reading |
| `modifying` | 10 min | Active work |
| `claiming` | 1 hour | Ownership |
| `completed` | 24 hours | Done |
| `warning` | Never | Danger |
| `blocked` | 5 min | Stuck |
| `proposal` | 1 hour | Awaiting approval |
| `needs_review` | 30 min | Review requested |
Important: Node IDs must come from graph tool responses (`search_codebase`, `traverse_from_node`). Never fabricate node IDs; they are hash-based strings like `proj_xxx:ClassDeclaration:abc123`.
Workflow Example
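A sketch of the lifecycle; the pheromone tool and parameter names below are placeholders (substitute the actual names from your tool list):

```
1. search_codebase       → get a real node ID, e.g. proj_xxx:ClassDeclaration:abc123
2. <leave pheromone>     { "nodeId": "proj_xxx:ClassDeclaration:abc123", "type": "claiming", "swarmId": "swarm-1" }
3. ...do the work on that node...
4. <leave pheromone>     { "nodeId": "proj_xxx:ClassDeclaration:abc123", "type": "completed", "swarmId": "swarm-1" }
5. <bulk delete>         { "swarmId": "swarm-1" }
```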
Tips for Managing Large Responses
Set `includeCode: false` for a structure-only view
Set `summaryOnly: true` for just file paths and statistics
Use `relationshipTypes: ["INJECTS"]` to filter specific relationships
Use `direction: "OUTGOING"` or `"INCOMING"` to focus exploration
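These options can be combined in a single `traverse_from_node` call (the `nodeId` value is illustrative):

```json
{
  "projectId": "my-backend",
  "nodeId": "proj_xxx:ClassDeclaration:abc123",
  "direction": "OUTGOING",
  "relationshipTypes": ["INJECTS"],
  "includeCode": false,
  "summaryOnly": true
}
```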
Framework Support
NestJS Framework Schema
The server provides deep understanding of NestJS patterns:
Node Types
Controllers: HTTP endpoint handlers with route analysis
Services: Business logic providers with dependency injection mapping
Modules: Application structure with import/export relationships
Guards: Authentication and authorization components
Pipes: Request validation and transformation
Interceptors: Request/response processing middleware
DTOs: Data transfer objects with validation decorators
Entities: Database models with relationship mapping
Relationship Types
Module System: `MODULE_IMPORTS`, `MODULE_PROVIDES`, `MODULE_EXPORTS`
Dependency Injection: `INJECTS`, `PROVIDED_BY`
HTTP API: `EXPOSES`, `ACCEPTS`, `RESPONDS_WITH`
Security: `GUARDED_BY`, `TRANSFORMED_BY`, `INTERCEPTED_BY`
Example Graph Structure
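A sketch in Cypher-style pattern notation, using the node and relationship types listed above (entity names are illustrative):

```cypher
(UsersModule:Module)-[:MODULE_IMPORTS]->(DatabaseModule:Module)
(UsersModule:Module)-[:MODULE_PROVIDES]->(UserService:Service)
(UserController:Controller)-[:INJECTS]->(UserService:Service)
(UserController:Controller)-[:EXPOSES]->(getUser:HttpEndpoint)
(getUser:HttpEndpoint)-[:GUARDED_BY]->(AuthGuard:Guard)
(getUser:HttpEndpoint)-[:RESPONDS_WITH]->(UserDto:DTO)
```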
Configuration
Environment Variables
| Variable | Description | Default |
| --- | --- | --- |
| `OPENAI_API_KEY` | OpenAI API key for embeddings and LLM | Required |
| `OPENAI_ASSISTANT_ID` | Reuse existing OpenAI assistant | Optional |
| `NEO4J_URI` | Neo4j database URI | |
| `NEO4J_USERNAME` | Neo4j username | |
| `NEO4J_PASSWORD` | Neo4j password | |
| | Neo4j query timeout | |
| | Neo4j connection timeout | |
| | Embedding API timeout | |
| | Assistant API timeout | |
Parse Options
Customize parsing behavior:
Limitations
Current Limitations
Language Support: Currently supports TypeScript/NestJS only
Framework Support: Primary focus on NestJS patterns (React, Angular, Vue planned)
File Size: Large files (>10MB) may cause parsing performance issues
Memory Usage: Mitigated by streaming import for large projects
Vector Search: Requires OpenAI API for semantic search functionality
Response Size: Large graph traversals can exceed token limits (25,000 tokens max)
Neo4j Memory: Database memory limits can cause query failures on large graphs
Performance Considerations
Large Projects: Use `async: true` for projects with >1000 files
Streaming: Auto-enabled for >100 files to prevent memory issues
Graph Traversal: Deep traversals (>5 levels) may be slow for highly connected graphs
Embedding Generation: Initial parsing with embeddings can take several minutes for large codebases
Neo4j Memory: Recommend at least 4GB RAM allocation for Neo4j with large graphs
Worker Timeout: Async parsing has 30-minute timeout for safety
Known Issues
Complex Type Inference: Advanced TypeScript type gymnastics may not be fully captured
Dynamic Imports: Runtime module loading not tracked in static analysis
Decorator Arguments: Complex decorator argument patterns may not be fully parsed
Troubleshooting
Common Issues
Neo4j Connection Failed
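First confirm the database is actually up, assuming Neo4j runs in a Docker container named `neo4j`:

```bash
docker ps --filter name=neo4j    # is the container running?
docker logs neo4j --tail 50      # look for startup errors
```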
Neo4j Memory Issues
If you encounter errors like "allocation of an extra X MiB would use more than the limit":
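Raise Neo4j's memory limits. These are the standard Neo4j 5.x setting names; tune the values to your machine:

```
# neo4j.conf
server.memory.heap.initial_size=2g
server.memory.heap.max_size=4g
server.memory.pagecache.size=2g
```

In Docker, the official image maps these to environment variables by replacing `.` with `_` and `_` with `__` (e.g. `NEO4J_server_memory_heap_max__size=4g`).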
Token Limit Exceeded
If responses exceed token limits:
OpenAI API Issues
Parsing Failures
Debug Mode
Enable detailed logging:
Contributing
Fork the repository
Create a feature branch: `git checkout -b feature/amazing-feature`
Commit your changes: `git commit -m 'Add amazing feature'`
Push to the branch: `git push origin feature/amazing-feature`
Open a Pull Request
Development Setup
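A typical setup, assuming standard npm scripts (the repository URL and script names are not confirmed by this README):

```bash
git clone <repository-url>
cd code-graph-context
npm install
npm run build
npm test
```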
License
This project is proprietary software. All rights reserved - see the LICENSE file for details.
Acknowledgments
Model Context Protocol by Anthropic
Neo4j for graph database technology
ts-morph for TypeScript AST manipulation
OpenAI for embeddings and natural language processing
NestJS for the framework patterns and conventions
Support
Create an Issue for bug reports or feature requests
Join the MCP Discord for community support
Check the MCP Documentation for MCP-specific questions