mcp-deepcontext
MCP server enabling symbol-aware semantic search in Claude Code
motivation
Most code search tools treat code as text, ignoring the semantic structure that makes code meaningful. When you're working in a large codebase, you don't just want to find where a string appears. You want to find where a function is called, where a type is implemented, or what modules depend on a particular symbol.
This MCP server bridges that gap by exposing semantic code analysis capabilities to Claude. Instead of grepping for text, Claude can ask questions like "where is this interface implemented?" or "what are all the callers of this function?" and get accurate, symbol-aware results. This makes it possible to have actually useful conversations about unfamiliar codebases.
architecture
graph TD
A[Claude Desktop] -->|MCP Protocol| B[mcp-deepcontext Server]
B -->|Parse & Index| C[Symbol Database]
B -->|AST Analysis| D[TypeScript Compiler API]
B -->|Semantic Search| E[Vector Embeddings]
C -->|Symbol Locations| B
D -->|Type Information| B
E -->|Similarity Scores| B
F[Your Codebase] -->|Watch & Reindex| B
getting started
install
npm install -g mcp-deepcontext
quickstart
Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"deepcontext": {
"command": "mcp-deepcontext",
"args": ["--workspace", "/path/to/your/project"]
}
}
}
Then restart Claude Desktop. The server will index your codebase on startup.
how it works
The server uses the TypeScript compiler API to build a complete symbol graph of your codebase. This includes type definitions, function signatures, class hierarchies, and import relationships. When Claude queries for information, the server can respond with precise locations and context.
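As a rough sketch of that indexing step (not the server's actual implementation; SymbolEntry and indexFile are illustrative names), extracting named declarations with the compiler API looks something like this:

import ts from "typescript";

// Illustrative only: walk a source file and record named declarations.
interface SymbolEntry {
  name: string;
  kind: string;
  file: string;
  line: number;
}

function indexFile(program: ts.Program, fileName: string): SymbolEntry[] {
  const checker = program.getTypeChecker();
  const source = program.getSourceFile(fileName);
  if (!source) return [];

  const entries: SymbolEntry[] = [];
  const visit = (node: ts.Node): void => {
    // Record functions, classes, and interfaces that introduce a name.
    if (
      (ts.isFunctionDeclaration(node) || ts.isClassDeclaration(node) || ts.isInterfaceDeclaration(node)) &&
      node.name
    ) {
      const symbol = checker.getSymbolAtLocation(node.name);
      if (symbol) {
        const { line } = source.getLineAndCharacterOfPosition(node.name.getStart());
        entries.push({
          name: symbol.getName(),
          kind: ts.SyntaxKind[node.kind],
          file: fileName,
          line: line + 1, // compiler positions are zero-based
        });
      }
    }
    ts.forEachChild(node, visit);
  };
  visit(source);
  return entries;
}

// e.g. indexFile(ts.createProgram(["src/index.ts"], { allowJs: true }), "src/index.ts")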
For semantic search, the server generates embeddings for code symbols and their surrounding context. This allows for fuzzy matching on intent rather than exact text. The symbol database is kept in sync with file changes through a file watcher that triggers incremental re-indexing.
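The embedding step, sketched with the OpenAI Node SDK (function names here are illustrative, not the server's actual code):

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Embed a batch of symbol snippets (name plus surrounding context).
async function embedSymbols(snippets: string[]): Promise<number[][]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: snippets,
  });
  return res.data.map((d) => d.embedding);
}

// Query-time ranking: cosine similarity between the query vector and each symbol vector.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}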
The MCP protocol exposes this through tools like search_symbols, find_references, get_definition, and find_implementations. Claude can chain these together to answer complex questions about code structure and relationships.
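For a sense of what that surface looks like, here is a minimal sketch of registering one such tool with the official MCP TypeScript SDK; the argument schema and stubbed result are illustrative, not the server's actual definitions:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "deepcontext", version: "0.1.0" });

// Register one tool; the symbol lookup itself is stubbed out here.
server.tool(
  "find_references",
  { symbol: z.string(), file: z.string().optional() },
  async ({ symbol }) => {
    const refs = [{ file: "src/orders.ts", line: 42 }]; // placeholder result
    return {
      content: [{ type: "text" as const, text: JSON.stringify({ symbol, refs }) }],
    };
  }
);

await server.connect(new StdioServerTransport());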
configuration
The server accepts these command-line arguments:
--workspace <path>: Root directory of the project to index (required)
--languages <list>: Comma-separated list of languages to index (default: typescript,javascript)
--exclude <patterns>: Glob patterns to exclude (default: node_modules,dist,build,.git)
--max-file-size <bytes>: Skip files larger than this (default: 1MB)
--embedding-model <name>: Model to use for embeddings (default: text-embedding-3-small)
Environment variables:
OPENAI_API_KEY: Required for embedding generation
DEEPCONTEXT_LOG_LEVEL: Set to debug for verbose logging
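A hypothetical invocation combining a few of these options (the workspace path is just an example):

OPENAI_API_KEY=sk-... DEEPCONTEXT_LOG_LEVEL=debug mcp-deepcontext --workspace ~/code/acme-api --exclude node_modules,dist,.git --max-file-size 2000000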
faq
Q: What languages are supported?
Currently TypeScript and JavaScript with full type awareness. Python and Go support is planned.
Q: Does this work with large codebases?
Yes, the indexing is incremental and the symbol database uses an efficient graph structure. Tested on codebases up to 500k lines without issues.
Q: How much does embedding generation cost?
For a typical 100k line codebase, initial indexing generates about 10k embeddings, costing roughly $0.02 with text-embedding-3-small. Incremental updates are much cheaper.
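As a rough back-of-envelope (assuming on the order of 100 tokens of symbol context per embedding): 10,000 embeddings × ~100 tokens ≈ 1M tokens, which at text-embedding-3-small's list price of about $0.02 per million tokens comes to roughly $0.02.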
Q: Can I use this without embeddings?
Yes, pass --no-embeddings to disable semantic search. You'll still get all the symbol-aware tools such as find_references and get_definition.
Q: Does this send my code to external services?
Only the extracted symbol names and their immediate context are sent to OpenAI for embedding generation. You can disable this with --no-embeddings for completely local operation.
license
MIT