- Analyzes Angular framework patterns, including signals and standalone components, to give AI agents context on team-specific architectural conventions.
- Identifies and tracks Jest testing conventions so AI assistants follow the project's established testing patterns.
- Integrates with OpenAI's API to generate cloud-based vector embeddings for semantic code search and indexing.
- Monitors the usage frequency of PrimeNG components so AI agents suggest the correct library wrappers or components based on codebase history.
- Parses TypeScript codebases to extract pattern frequencies, library usage statistics, and golden file examples for AI-driven development.
codebase-context
AI coding agents don't know your codebase. This MCP fixes that.
Your team has internal libraries, naming conventions, and patterns that external AI models have never seen. This MCP server gives AI assistants real-time visibility into your codebase: which libraries your team actually uses, how often, and where to find canonical examples.
Quick Start
Add this to your MCP client config (Claude Desktop, VS Code, Cursor, etc.).
If your environment prompts on first run, use `npx --yes ...` (or `npx -y ...`) to auto-confirm.
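The config block itself was not carried over above. As a sketch, a typical entry might look like the following (the package name `codebase-context` and the args are assumptions based on this repo's name; check the published package for the exact command):

```json
{
  "mcpServers": {
    "codebase-context": {
      "command": "npx",
      "args": ["--yes", "codebase-context"]
    }
  }
}
```

The `--yes` flag matches the auto-confirm tip above, so the server can start without an interactive prompt.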
What You Get
- **Internal library discovery** → `@mycompany/ui-toolkit`: 847 uses vs `primeng`: 3 uses
- **Pattern frequencies** → `inject()`: 97%, `constructor()`: 3%
- **Pattern momentum** → Signals: Rising (last used 2 days ago) vs RxJS: Declining (180+ days)
- **Golden file examples** → Real implementations showing all patterns together
- **Testing conventions** → Jest: 74%, Playwright: 6%
- **Framework patterns** → Angular signals, standalone components, etc.
- **Circular dependency detection** → Find toxic import cycles between files
- **Memory system** → Record the "why" behind choices so AI doesn't repeat mistakes
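To make the frequency stats above concrete, here is a minimal sketch of how a percentage like "`inject()`: 97%" could be derived from source text. This is an illustration only: the real analyzer works on the TypeScript AST, and the regexes and type names here are assumptions.

```typescript
// Hypothetical sketch: derive pattern frequencies (e.g. inject() vs
// constructor DI) from raw source strings. Illustrative only; the
// actual analyzer parses the TypeScript AST rather than using regexes.

type PatternStats = Record<string, { count: number; pct: number }>;

function patternFrequencies(sources: string[]): PatternStats {
  const patterns: Record<string, RegExp> = {
    "inject()": /\binject\s*\(/g,
    "constructor()": /\bconstructor\s*\(\s*(?:private|public|readonly)/g,
  };
  const counts: Record<string, number> = {};
  let total = 0;
  for (const [name, re] of Object.entries(patterns)) {
    let n = 0;
    for (const src of sources) n += (src.match(re) ?? []).length;
    counts[name] = n;
    total += n;
  }
  const stats: PatternStats = {};
  for (const [name, n] of Object.entries(counts)) {
    // Percentage of all matched DI patterns, rounded to whole percent
    stats[name] = { count: n, pct: total ? Math.round((n / total) * 100) : 0 };
  }
  return stats;
}

// Demo: two components using inject(), one using constructor DI
const demo = [
  "export class A { private svc = inject(MyService); }",
  "export class B { private svc = inject(Other); }",
  "export class C { constructor(private svc: MyService) {} }",
];
const stats = patternFrequencies(demo);
```

For the three demo files this yields `inject()`: 67% and `constructor()`: 33%; over a real codebase the same counting produces the 97%/3% style of split shown above.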
How It Works
When generating code, the agent checks your patterns first:
| Without MCP | With MCP |
| --- | --- |
| Uses | Uses |
| Suggests | Uses |
| Generic Jest setup | Your team's actual test utilities |
Tip: Auto-invoke in your rules
Add this to your .cursorrules, CLAUDE.md, or AGENTS.md:
Now the agent checks patterns automatically instead of waiting for you to ask.
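A rule along these lines works (the wording is illustrative; the tool names come from the Tools section below):

```markdown
Before generating or modifying code, call get_team_patterns to check this
repo's conventions, and use search_codebase to find canonical examples.
When we settle a design question, record the rationale with remember.
```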
Tools
| Tool | Purpose |
| --- | --- |
| `search_codebase` | Semantic + keyword hybrid search |
| | Find where a library/component is used |
| `get_team_patterns` | Pattern frequencies + canonical examples |
| | Project structure overview |
| | Indexing progress + last stats |
| | Query style guide rules |
| | Find import cycles between files |
| `remember` | Record memory (conventions/decisions/gotchas) |
| | Query recorded memory by category/keyword |
| `refresh_index` | Re-index the codebase |
File Structure
The MCP creates the following structure in your project:
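The layout itself was not carried over; as a sketch, only `memory.json` is confirmed elsewhere in this README, and the other entries are illustrative:

```
.codebase-context/
├── memory.json   # recorded conventions, decisions, gotchas (share with the team)
└── ...           # generated index / vector-database files (large, keep local)
```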
**Recommended:** The vector database and generated files can be large. Add the following to your `.gitignore` to keep them local while still sharing team memory:
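A sketch of such an entry, assuming the generated files live under `.codebase-context/` and only `memory.json` is worth committing:

```gitignore
# Ignore generated index/vector files, but keep shared team memory tracked
.codebase-context/*
!.codebase-context/memory.json
```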
Memory System
Patterns tell you what the team does ("97% use `inject`"), but not why ("standalone compatibility"). Use `remember` to capture rationale that prevents repeated mistakes:

Memories surface automatically in `search_codebase` results and `get_team_patterns` responses.
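For example, a chat prompt like this (hypothetical wording; the agent turns it into a `remember` call):

```
@codebase-context remember this: we use inject() instead of constructor
injection because constructor DI is awkward with standalone components.
```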
This is an early baseline, with some known quirks:

- Agents may bundle multiple things into one entry
- Duplicates can happen if you record the same thing twice
- Edit `.codebase-context/memory.json` directly to clean up
- Be explicit: "Remember this: use X not Y"
Configuration
| Variable | Default | Description |
| --- | --- | --- |
| | - | Required when using the OpenAI embedding provider |
| | - | Project root to index (CLI arg takes precedence) |
| | - | Set to |
Performance Note
This tool runs locally on your machine using your hardware.
Initial Indexing: The first run works hard. It may take several minutes (e.g., ~2-5 mins for 30k files) to compute embeddings for your entire codebase.
Caching: Subsequent queries are instant (milliseconds).
Updates: Currently, `refresh_index` re-scans the entire codebase. True incremental indexing (processing only changed files) is on the roadmap.
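The roadmap idea above amounts to change detection: re-embed only files whose content actually changed. A minimal sketch of content-hash detection is below (illustrative only, not the server's actual code; function and cache names are made up):

```typescript
import { createHash } from "node:crypto";

// Sketch of incremental change detection: compare each file's content
// hash against a cached hash and return only the paths that differ,
// so only those files would need re-embedding.

function changedFiles(
  current: Map<string, string>,      // path -> current file contents
  cachedHashes: Map<string, string>  // path -> previously stored hash
): string[] {
  const changed: string[] = [];
  for (const [path, contents] of current) {
    const hash = createHash("sha256").update(contents).digest("hex");
    // New files (no cached hash) and modified files both count as changed
    if (cachedHashes.get(path) !== hash) changed.push(path);
  }
  return changed;
}
```

With this scheme an unchanged 30k-file repo costs only 30k hash comparisons on re-index, instead of 30k embedding calls.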
Links
📄 Motivation — Why this exists, research, learnings
📋 Changelog — Version history
🤝 Contributing — How to add analyzers
License
MIT