Project Code Intelligence
Hardware-Accelerated Codebase Mapping
project-code-intelligence indexes a Git repository into Postgres/pgvector and
serves the result through a small stdio MCP server.
The goal is higher-quality agent results: agents reuse a local code index instead of re-reading the same repository over and over, which reduces token and embedding cost and makes codebase navigation faster.
It can store:
repository snapshots and file inventory
functions, classes, symbols, docs, config, and other code records
candidate relationships between records
SARIF/static-analysis findings and code-flow steps
semantic embeddings for similarity search
The package is generic by default. Project-specific behavior belongs in code
profiles, with example.py
as the public example.
Quick Start
Use the scripts directly from the repository checkout, or install the package into your active Python environment.
cd /path/to/project-code-intelligence
uv sync --extra dev
export PATH="$PWD:$PATH"
pci-doctor --skip-db --embedding skip

The first pci-doctor run prints startup commands that fit the current
machine. Run one of the commands from its Available startup commands section,
then verify the chosen services:
pci-doctor --embedding required

Text-only indexing is available as a fallback for bootstrap, debugging, or
privacy-sensitive environments. In that case, choose the Postgres-only command
and verify with pci-doctor --embedding skip.
Then index a Git repository:
cd /path/to/repo-to-index
pci-index --dry-run
pci-index
pci-mcp-smoke

For that fallback text-only mode, run pci-index --no-embed.
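Putting the fallback pieces together, a minimal text-only run (assuming the Postgres-only Compose command from pci-doctor is already up) looks roughly like this:

# Text-only fallback: Postgres only, no embedding service.
pci-doctor --embedding skip       # verify the database without requiring embeddings
cd /path/to/repo-to-index
pci-index --no-embed              # index records without generating embeddings
pci-mcp-smoke                     # confirm the MCP server can read the index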
In a brand-new local repository, make an initial commit before scanning so the
indexer has a Git HEAD snapshot.
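For example, in a freshly created repository, a first run might look like this (paths are illustrative):

# Give the indexer a Git HEAD snapshot before the first scan.
cd /path/to/new-repo
git init                          # skip if the repository already has history
git add -A
git commit -m "Initial commit"
pci-index --dry-run               # preview what would be indexed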
Installation
For development:
uv sync --extra dev

For use from another repository:

uv pip install -e /path/to/project-code-intelligence

Without uv:

python -m pip install -e /path/to/project-code-intelligence

The installed console scripts are:

pci-index
pci-doctor
pci-mcp
pci-mcp-smoke
pci-embedding-bench
pci-embedding-server
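To confirm the scripts resolve after installation, a plain shell check is enough (this is a generic PATH check, not a package feature):

# Verify the console scripts are on PATH, then run a database-free sanity check.
command -v pci-index pci-doctor pci-mcp pci-mcp-smoke
pci-doctor --skip-db --embedding skip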
MCP Setup
Point Codex, Claude Desktop, or another MCP client at pci-mcp:
{
"mcpServers": {
"project-code-intelligence": {
"command": "/path/to/project-code-intelligence/pci-mcp"
}
}
}

The default database settings match the local Docker Compose database. Set
PGVECTOR_* only when using a different Postgres/pgvector instance.
For agent-heavy workflows, copy
docs/examples/AGENTS.md into the repository being
indexed so coding assistants know when to use the MCP index.
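For example (the destination filename at the repository root is an assumption; place it wherever your assistant expects its instructions):

# Copy the example agent instructions into the repository being indexed.
cp /path/to/project-code-intelligence/docs/examples/AGENTS.md /path/to/repo-to-index/AGENTS.md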
Embeddings
Embeddings are the expected path for normal use. They are what make the MCP index useful for semantic search instead of only exact text lookup.
Common paths are CPU FastEmbed, AMD Ryzen AI NPU, AMD GPU, NVIDIA GPU, and
remote OpenAI-compatible providers. pci-doctor prints the exact startup
commands that are available on the current machine.
Run pci-doctor to see which paths are available on the current machine:
pci-doctor --embedding required

pci-index itself does not download models. The Docker Compose embedding
profiles may download models into Docker volumes or ignored local paths.
Remote embedding endpoints receive source-derived text. For private code, use a
local endpoint or a provider you trust, and set
PROJECT_CODE_INTELLIGENCE_ALLOW_REMOTE_EMBEDDING=1 only intentionally.
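If you do opt in, a sketch of scoping the override to a single run (rather than exporting it globally) is:

# Deliberately allow source-derived text to be sent to a remote embedding endpoint.
PROJECT_CODE_INTELLIGENCE_ALLOW_REMOTE_EMBEDDING=1 pci-index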
Docker Compose Profiles
Profiles are runtime choices, not project modes:
Profile | Use when |
none | Postgres/pgvector only, for text search or an external embedding provider. |
cpu | Portable local semantic-search demo with FastEmbed. |
| Experimental AMD Ryzen AI/XDNA NPU embeddings. |
| Experimental AMD ROCm llama.cpp embeddings. |
| Experimental NVIDIA CUDA llama.cpp embeddings. |
List the profiles with:
docker compose config --profiles

Most users should start with cpu, then let pci-doctor suggest hardware-specific commands if local acceleration is available.
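As a sketch of the common starting point (assuming the Compose file defines the cpu profile named above; prefer the exact command pci-doctor prints for your machine):

# Start Postgres/pgvector plus the portable FastEmbed CPU embedding service.
docker compose --profile cpu up -d
pci-doctor --embedding required   # confirm both services are reachable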
Docker Lifecycle
Use up -d to start the profile suggested by pci-doctor. Use stop when you
want to pause containers but keep them around:
docker compose stop

Use down for normal cleanup. This removes containers and the Compose network
while keeping the local database and downloaded model caches:
docker compose down

Use down -v only when you intentionally want a fresh database and fresh
Docker-managed model caches:
docker compose down -v

That deletes the named volumes for Postgres, FastEmbed, Lemonade, and ROCm
runtime caches. It does not delete the bind-mounted ./models directory used by
the GPU profiles.
On Apple Silicon, Docker Compose is still useful for Postgres/pgvector. Local Apple GPU embeddings should run on the macOS host, not inside Docker.
What the MCP Server Provides
The server exposes tools for:
checking indexed snapshot and embedding status
text and semantic search over indexed records
fetching individual records
following candidate relationships
searching SARIF/static-analysis findings
fetching CodeQL/SARIF code-flow steps
The MCP server runs over stdio. Docker Compose is used for local dependencies, not for wrapping the MCP process.
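Before pointing a client at it, the bundled smoke check is a quick way to verify the server starts and responds:

# Smoke-check the MCP server over stdio against the local index.
pci-mcp-smoke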
Project Profiles
The generic profile covers common source, docs, build files, config files, and SARIF input. A project can add its own profile for domain-specific file roles, metadata, records, or security context.
Private profiles do not need to be registered in this package. Put them on
PYTHONPATH and select them with a fully qualified profile path:
PROJECT_CODE_INTELLIGENCE_PROFILE=my_project.code_profile:MyProjectProfile pci-index

Profiles are ordinary Python code, so load them only from trusted local modules.
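Concretely, with a hypothetical my_project package that lives outside this repository:

# Make the private profile importable, then select it by module:Class path.
export PYTHONPATH=/path/to/private-profiles:$PYTHONPATH
PROJECT_CODE_INTELLIGENCE_PROFILE=my_project.code_profile:MyProjectProfile pci-index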
Development
Run the local quality gate:
make check

Run the integration smoke against a running Compose database:
docker compose up -d pgvector
make integration-smoke

Useful docs:
CONTRIBUTING.md: contributor workflow and guardrails
docs/BENCHMARKS.md: local CPU/NPU/GPU benchmark notes
.env.example: available environment variables
AGENTS.md: instructions for assistants working on this repo
Privacy
Do not publish database dumps, restore artifacts, SARIF output, embedding caches, model files, vector indexes, local MCP configs, or generated data from private repositories. These can contain source snippets, internal paths, symbols, findings, metadata, and embeddings derived from source text.
License
MIT. See LICENSE.