tenets
MCP server for context that feeds your prompts.
Intelligent code context aggregation + automatic guiding principles injection—100% local.
Coverage note: Measures core modules (distiller, ranking, MCP, CLI, models). Optional features (viz, language analyzers) are excluded.
tenets is an MCP server for AI coding assistants. It solves two critical problems:
Intelligent Code Context — Finds, ranks, and aggregates the most relevant code using NLP (BM25, TF-IDF, import centrality, git signals). No more manual file hunting.
Automatic Guiding Principles — Injects your tenets (coding standards, architecture rules, security requirements) into every prompt automatically. Prevents context drift in long conversations.
Integrates natively with Cursor, Claude Desktop, Windsurf, VS Code via Model Context Protocol. Also ships a CLI and Python library. 100% local processing — no API costs, no data leaving your machine.
What is tenets?
Finds all relevant files automatically using NLP analysis
Ranks them by importance using BM25, TF-IDF, ML embeddings, and git signals
Aggregates them within your token budget with intelligent summarizing
Injects guiding principles (tenets) automatically into every prompt for consistency
Integrates natively with AI assistants via Model Context Protocol (MCP)
Pins critical files per session for guaranteed inclusion
Transforms content on demand (strip comments, condense whitespace, or force full raw context)
MCP-first Quickstart (recommended)
Install + start MCP server
```bash
pip install tenets[mcp]
tenets-mcp
```
Claude Code (CLI / VS Code extension)
```bash
claude mcp add tenets -s user -- tenets-mcp
```
Or manually add to ~/.claude.json:
```json
{
  "mcpServers": {
    "tenets": {
      "type": "stdio",
      "command": "tenets-mcp",
      "args": []
    }
  }
}
```
Claude Desktop (macOS app - ~/Library/Application Support/Claude/claude_desktop_config.json)
```json
{
  "mcpServers": {
    "tenets": { "command": "tenets-mcp" }
  }
}
```
Cursor (~/.cursor/mcp.json)
```json
{
  "mcpServers": {
    "tenets": { "command": "tenets-mcp" }
  }
}
```
Windsurf (~/.windsurf/mcp.json)
```json
{
  "tenets": { "command": "tenets-mcp" }
}
```
VS Code Extension (alternative for VS Code users)
Or search "Tenets MCP Server" in VS Code Extensions
Extension auto-starts the server and provides status indicator + commands
Docs (full tool list & transports): https://tenets.dev/MCP/
Installation (CLI/Python)
Important: The [mcp] extra is required for MCP server functionality. Without it:
The tenets-mcp executable exists but will fail when you try to run it
Missing dependencies: mcp, sse-starlette, uvicorn (15 additional packages)
You'll get a clear error: ImportError: MCP dependencies not installed
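If you're unsure whether the extra is present, a quick check like the following confirms the MCP dependencies are importable (an illustrative sketch, not part of tenets; the module names correspond to the packages listed above):

```python
# Sketch: check that the [mcp] extra's dependencies are importable.
import importlib.util

missing = [mod for mod in ("mcp", "sse_starlette", "uvicorn")
           if importlib.util.find_spec(mod) is None]
if missing:
    print("MCP extra not installed; run: pip install tenets[mcp] "
          f"(missing modules: {', '.join(missing)})")
else:
    print("MCP dependencies look installed.")
```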
MCP Tool Surface (AI assistants)
Start the MCP server
```bash
pip install tenets[mcp]
tenets-mcp
```
Cursor (~/.cursor/mcp.json)
```json
{
  "mcpServers": {
    "tenets": { "command": "tenets-mcp" }
  }
}
```
Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json)
```json
{
  "mcpServers": {
    "tenets": { "command": "tenets-mcp" }
  }
}
```
Tools exposed: distill, rank, examine, session_*, tenet_* (same surface as the CLI).
Docs: see docs/MCP.md for the full endpoint/tool list, SSE/HTTP details, and IDE notes.
MCP Server (AI assistant integration)
Once you start tenets-mcp and drop one of the configs above into your IDE, ask your AI:
“Use tenets to find the auth code” (calls distill)
“Pin src/auth to session auth-feature” (calls session_pin_folder)
“Rank files for the payment bug” (calls rank_files)
See MCP docs for transports (stdio/SSE/HTTP), tool schemas, and full examples.
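For scripted or non-IDE use, the same tools can also be called programmatically over stdio with the official MCP Python SDK. A minimal sketch follows; the {"prompt": ...} argument shape is an assumption, so check the tool schemas in the MCP docs:

```python
# Sketch: call the tenets MCP server over stdio using the official `mcp` Python SDK.
# The argument dict passed to call_tool is an assumption; see the MCP docs for real schemas.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="tenets-mcp", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            result = await session.call_tool("distill", {"prompt": "find the auth code"})
            print(result)


asyncio.run(main())
```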
Quick Start
Three Ranking Modes
Tenets offers three modes that balance speed vs. accuracy for both distill and rank commands:
| Mode | Speed | Accuracy | Use Case | What It Does |
|------|-------|----------|----------|--------------|
| fast | Fastest | Good | Quick exploration | Keyword & path matching, basic relevance |
| balanced | 1.5x slower | Better | Most use cases (default) | BM25 scoring, keyword extraction, structure analysis |
| thorough | 4x slower | Best | Complex refactoring | ML semantic similarity, pattern detection, dependency graphs |
Core Commands
distill - Build Context with Content
rank - Preview Files Without Content
Sessions & Guiding Principles (Tenets)
The killer feature: define guiding principles once, and they're automatically injected into every prompt.
Why this matters: In long AI conversations, context drifts. The AI forgets your coding standards. Tenets solve this by re-injecting your rules every time.
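Conceptually, the mechanism is simple, as in this sketch (illustrative only, not the tenets API): prepend the active principles to every context block before it reaches the model.

```python
# Conceptual sketch only — not the tenets API. Shows the idea of re-injecting
# guiding principles into every prompt so long conversations don't drift.
GUIDING_PRINCIPLES = [
    "All public functions must have type hints and docstrings.",
    "Never log credentials, tokens, or other secrets.",
]


def inject_principles(context: str, principles: list[str] = GUIDING_PRINCIPLES) -> str:
    """Prepend the project's guiding principles to a context block."""
    header = "\n".join(f"- {p}" for p in principles)
    return f"Guiding principles:\n{header}\n\n{context}"


prompt = inject_principles("Relevant files:\nsrc/auth/login.py\n...")
```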
Other Commands
Configuration
Create .tenets.yml in your project:
How It Works
Code analysis intelligence
tenets employs a multi-layered approach optimized specifically for code understanding (though its core ranking could be applied to any document-matching task). It tokenizes camelCase and snake_case identifiers intelligently, excludes test files by default unless your prompt specifically mentions them, and includes language-specific AST parsing for 15+ languages.
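For example, camelCase/snake_case splitting can be illustrated with a small sketch like this (not tenets' actual tokenizer):

```python
# Sketch of identifier tokenization (illustrative; not tenets' internal tokenizer).
import re


def split_identifier(name: str) -> list[str]:
    """Split camelCase and snake_case identifiers into lowercase tokens."""
    tokens: list[str] = []
    for part in re.split(r"[_\W]+", name):
        tokens.extend(re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", part))
    return [t.lower() for t in tokens if t]


print(split_identifier("getUserAuthToken"))     # ['get', 'user', 'auth', 'token']
print(split_identifier("parse_http_response"))  # ['parse', 'http', 'response']
```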
Multi-ranking NLP
The deterministic algorithms in balanced mode are fast and reliable, which is why it is the default. BM25 scoring keeps files with highly repetitive patterns from being over-weighted (a test file that references "response" over and over won't necessarily dominate searches for "response").
The default ranking factors consist of: BM25 scoring (25% - statistical relevance preventing repetition bias), keyword matching (20% - direct substring matching), path relevance (15%), TF-IDF similarity (10%), import centrality (10%), git signals (10% - recency 5%, frequency 5%), complexity relevance (5%), and type relevance (5%).
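In code form, this is just a weighted sum of normalized per-factor scores, roughly like the sketch below (weights taken from the list above; not the actual implementation):

```python
# Sketch: weighted combination of normalized factor scores (weights from the list above).
DEFAULT_WEIGHTS = {
    "bm25": 0.25,
    "keyword": 0.20,
    "path": 0.15,
    "tfidf": 0.10,
    "import_centrality": 0.10,
    "git_recency": 0.05,
    "git_frequency": 0.05,
    "complexity": 0.05,
    "type": 0.05,
}  # sums to 1.0


def combined_score(factor_scores: dict[str, float]) -> float:
    """Weighted sum of per-factor scores, each normalized to [0, 1]."""
    return sum(weight * factor_scores.get(name, 0.0)
               for name, weight in DEFAULT_WEIGHTS.items())
```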
Smart Summarization
When files exceed token budgets, tenets intelligently preserves:
Function/class signatures
Import statements
Complex logic blocks
Documentation and comments
Recent changes
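As a rough illustration of the idea (not tenets' summarizer), a Python file can be reduced to its imports plus top-level signatures with the standard ast module:

```python
# Rough illustration only: keep imports and top-level def/class signature lines, drop bodies.
import ast


def skeleton(source: str) -> str:
    lines = source.splitlines()
    kept = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            kept.append(lines[node.lineno - 1])
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            kept.append(lines[node.lineno - 1].rstrip() + "  # body omitted")
    return "\n".join(kept)
```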
ML / deep learning embeddings
Semantic understanding is available via the ML extras: pip install tenets[ml]. Enable it with the --ml and --reranker flags, or set use_ml: true and use_reranker: true in your config.
In thorough mode, sentence-transformer embeddings are enabled. They understand, for example, that authenticate() and login() are conceptually related, and that payment has some relevance overlap with them (since these concepts are typically associated).
Optional cross-encoder neural re-ranking in this mode jointly evaluates query-document pairs with self-attention for superior accuracy.
A cross-encoder, for example, will correctly rank "DEPRECATED: We no longer implement oauth2" lower than implement_authorization_flow() for query "implement oauth2", understanding the negative context despite keyword matches.
Because cross-encoders must run a full forward pass over every query-document pair (there are no precomputable embeddings to reuse), they're much slower than bi-encoders and are only used to re-rank the top-K results.
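A small sketch with the sentence-transformers library shows the two stages; the models and code here are illustrative, not tenets' internals:

```python
# Sketch of bi-encoder retrieval + cross-encoder re-ranking (illustrative models/code;
# not tenets' internal pipeline).
from sentence_transformers import CrossEncoder, SentenceTransformer, util

query = "implement oauth2"
docs = [
    "def implement_authorization_flow(): ...",
    "# DEPRECATED: We no longer implement oauth2",
]

# Stage 1 — bi-encoder: embed query and documents independently (fast).
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = bi_encoder.encode(docs, convert_to_tensor=True)
query_embedding = bi_encoder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, doc_embeddings, top_k=len(docs))[0]

# Stage 2 — cross-encoder: jointly score (query, doc) pairs for the top-K hits
# (slower, but understands context like "DEPRECATED").
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, docs[hit["corpus_id"]]) for hit in hits]
scores = cross_encoder.predict(pairs)
for hit, score in sorted(zip(hits, scores), key=lambda t: t[1], reverse=True):
    print(f"{score:.3f}  {docs[hit['corpus_id']]}")
```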
Documentation
Full Documentation - Complete guide and API reference
CLI Reference - All commands and options
Configuration Guide - Detailed configuration options
Architecture Overview - How tenets works internally
Output Formats
Python API
Supported Languages
Specialized analyzers for Python, JavaScript/TypeScript, Go, Java, C/C++, Ruby, PHP, Rust, and more. Configuration and documentation files are analyzed with smart heuristics for YAML, TOML, JSON, Markdown, etc.
Contributing
See CONTRIBUTING.md for guidelines.
License
MIT License - see LICENSE for details.
Documentation · MCP Guide · Privacy · Terms