Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Fraim Context MCP search for API authentication docs in deep mode".
That's it! The server will respond to your query, and you can continue using it as needed.
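An invocation like the one above splits into a server name and an instruction string. A minimal sketch of that parsing, where `parse_invocation` and the known-server list are hypothetical illustrations, not the chat client's actual code:

```python
# Hypothetical sketch: splitting a chat invocation of the form
# "@<server name> <instructions>" into its two parts.
KNOWN_SERVERS = ["Fraim Context MCP"]  # assumed registry of installed servers

def parse_invocation(message: str) -> tuple[str, str]:
    """Match the longest known server name after '@'; return (server, instructions)."""
    if not message.startswith("@"):
        raise ValueError("not a server invocation")
    body = message[1:]
    # Try longer names first so "Fraim Context MCP" wins over any shorter prefix.
    for name in sorted(KNOWN_SERVERS, key=len, reverse=True):
        if body.startswith(name):
            return name, body[len(name):].strip()
    raise ValueError("unknown server")

server, query = parse_invocation(
    "@Fraim Context MCP search for API authentication docs in deep mode"
)
print(server)  # Fraim Context MCP
print(query)   # search for API authentication docs in deep mode
```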
Fraim Context MCP
Semantic search MCP server for project documentation.
Version: 5.1.0
Status: In Development
Overview
Fraim Context MCP exposes project documentation to LLMs via the Model Context Protocol (MCP). It supports:
Fast mode: Direct cache/search for immediate results
Deep mode: Multi-round synthesis for complex queries
Hybrid search: Vector similarity + full-text search with pgvector
Smart caching: Redis with corpus versioning for cache invalidation
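The corpus-versioning idea behind the caching layer can be sketched without Redis: embed the current corpus version in every cache key, so bumping the version after a re-index orphans all stale entries at once. A minimal in-memory sketch, where the key scheme and function names are illustrative assumptions rather than the server's actual implementation:

```python
# Illustrative sketch of cache invalidation via corpus versioning.
# A plain dict stands in for Redis; keys embed the corpus version,
# so bumping the version makes every old entry unreachable (in Redis,
# the stale entries would then simply age out via TTL).
cache: dict[str, list[str]] = {}
corpus_version = 1

def cache_key(query: str) -> str:
    return f"search:v{corpus_version}:{query}"

def search(query: str) -> list[str]:
    key = cache_key(query)
    if key not in cache:
        # Pretend search: a real implementation would query the index here.
        cache[key] = [f"result for {query!r} @ corpus v{corpus_version}"]
    return cache[key]

search("api auth")                         # populates an entry under v1
corpus_version += 1                        # re-index: invalidate everything
assert cache_key("api auth") not in cache  # old entry is now unreachable
```

The design choice is that invalidation is O(1) regardless of how many queries were cached: nothing is deleted, the namespace simply moves.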
Quick Start
# 1. Setup Doppler
doppler login
doppler setup # Select: fraim-context → dev
# 2. Install dependencies
uv sync
# 3. Verify environment
doppler run -- uv run python scripts/verify_env.py
# 4. Run tests
doppler run -- uv run pytest tests/stage_0/ -v
Development
This project uses Test-Driven Development (TDD). See DNA/DEVELOPMENT_PLAN.md for stages.
# Run all tests
doppler run -- uv run pytest tests/ -v
# Run specific stage
doppler run -- uv run pytest tests/stage_0/ -v
# Lint
uv run ruff check src/ tests/
# Type check
uv run mypy src/fraim_mcp
Architecture
LLM Access: Pydantic AI Gateway (unified key for all providers)
Database: PostgreSQL + pgvector (1024-dim embeddings)
Cache: Redis 7.x (native asyncio)
Observability: Logfire (OpenTelemetry)
See DNA/specs/ARCHITECTURE.md for full details.
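The hybrid-search idea (vector similarity blended with full-text matching) can be sketched in plain Python. The weighting, toy embeddings, and scoring functions below are illustrative assumptions, not the server's actual pgvector/full-text SQL:

```python
import math

# Illustrative hybrid scoring: cosine similarity over toy embeddings,
# blended with a keyword-overlap score, mirroring how a pgvector
# distance and a full-text rank might be combined server-side.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, text: str) -> float:
    # Fraction of query terms that appear in the document text.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_score(q_vec, d_vec, query, doc_text, alpha=0.7):
    # alpha weights vector similarity vs. text match (assumed value).
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_score(query, doc_text)

score = hybrid_score([1.0, 0.0], [0.8, 0.2], "api auth docs", "docs about api auth")
```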
License
MIT