Claude Context is an MCP plugin that adds semantic code search to Claude Code and other AI coding agents, giving them deep context from your entire codebase.
An MCP server that provides persistent, cross-session memory and team knowledge sharing for AI development workflows. It enables project DNA scanning, semantic search, context budgeting, and git-aware indexing to prevent AI context loss between sessions.
A persistent memory and context management system for AI CLI tools that utilizes a three-layer architecture and semantic search to prevent context loss between sessions. It provides time-aware orientation and smart memory routing to help AI agents maintain project knowledge and architectural decisions.
An open-source memory layer that provides persistent project context and architectural history for AI development tools across multiple platforms and sessions. It enables AI assistants to maintain a shared understanding of codebases while integrating directly with services like Notion for documentation management.
A local MCP server for RAG memory, semantic search, and context optimization using Ollama and SQLite. It serves as a central hub that manages document embeddings, text compression, and proxies calls to other sub-MCP servers.
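The core of such a hub can be surprisingly small: store embeddings as blobs in SQLite and rank rows by cosine similarity. The sketch below is a minimal illustration, not this server's actual code; in the real server the vectors would come from an Ollama embedding model, while here toy 3-d vectors stand in.

```python
import sqlite3, struct, math

def pack(vec):
    # Serialize a float vector to bytes for SQLite BLOB storage
    return struct.pack(f"{len(vec)}f", *vec)

def unpack(blob):
    return list(struct.unpack(f"{len(blob) // 4}f", blob))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, text TEXT, embedding BLOB)")

# Toy stand-ins for model-generated embeddings
docs = [("auth middleware", [0.9, 0.1, 0.0]),
        ("payment webhook", [0.1, 0.9, 0.0]),
        ("login handler",   [0.8, 0.2, 0.1])]
for text, vec in docs:
    db.execute("INSERT INTO docs (text, embedding) VALUES (?, ?)", (text, pack(vec)))

def search(query_vec, k=2):
    # Brute-force scan; fine for local, modest-sized stores
    rows = db.execute("SELECT text, embedding FROM docs").fetchall()
    scored = [(cosine(query_vec, unpack(blob)), text) for text, blob in rows]
    return [t for _, t in sorted(scored, reverse=True)[:k]]

print(search([1.0, 0.0, 0.1]))
```

A brute-force scan keeps the design dependency-free; for larger stores a vector index extension would replace the full table scan.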
A high-performance MCP server providing up-to-date documentation for Go, npm, Python, Rust, Docker, Kubernetes, Terraform, and more — fetched from official sources, not training data.
Provides AI assistants with real-time visibility into your codebase's internal libraries, team patterns, naming conventions, and usage frequencies to generate code that matches your team's actual practices.
Provides persistent context management for AI agents by storing and querying semantic information using Upstash Vector DB and Google AI embeddings. It enables semantic search, batch operations, and metadata filtering to help agents retrieve relevant stored knowledge.
Your AI forgets everything between sessions. This fixes that: 98%+ retrieval accuracy (100% on LongMemEval with an LLM), 99% token savings, and 44 MCP tools. Fully local, zero cost.
An MCP server that preserves LLM context by intercepting large data outputs and returning only concise summaries or relevant sections. It enables efficient sandboxed code execution, file processing, and documentation indexing across multiple programming languages and authenticated CLIs.
Enables schema-aware exploration of JSON data by uploading samples, flattening nested structures, and using heuristic search with token overlap and fuzzy matching to find field paths for target names, accelerating ETL and API onboarding workflows.
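The flatten-then-match idea can be sketched in a few lines: collapse nested JSON into dotted paths, then rank paths by token overlap with the target name. This is an illustrative sketch under assumed conventions (dotted paths, `[]` for arrays), not the server's implementation.

```python
import re

def flatten(obj, prefix=""):
    """Flatten nested JSON into dotted field paths; '[]' marks array elements."""
    paths = {}
    if isinstance(obj, dict):
        for k, v in obj.items():
            paths.update(flatten(v, f"{prefix}.{k}" if prefix else k))
    elif isinstance(obj, list) and obj:
        paths.update(flatten(obj[0], f"{prefix}[]"))
    else:
        paths[prefix] = obj
    return paths

def tokens(s):
    # Split on dots, underscores, brackets, and whitespace
    return set(re.split(r"[._\[\]\s]+", s.lower())) - {""}

def find_paths(sample, target, k=2):
    # Rank flattened field paths by token overlap with the target name
    flat = flatten(sample)
    return sorted(flat, key=lambda p: -len(tokens(p) & tokens(target)))[:k]

sample = {"user": {"shipping_address": {"postal_code": "94107"},
                   "orders": [{"order_id": 1, "total": 9.99}]}}
print(find_paths(sample, "postal code"))
```

A fuzzy-matching pass (e.g. edit distance) would catch near-miss spellings that pure token overlap misses.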
Long AI conversations fail in predictable ways. Context-First fixes all four:

| Failure Mode | What Goes Wrong | Context-First Solution |
| --- | --- | --- |
| Context Drift | AI forgets earlier decisions and intent as the conversation grows | `context_loop` + `detect_drift` continuously re-anchor every turn |
| Silent Contradiction | New inputs silently overrule established facts; the AI doesn't notice | `detect_conflicts` compares every inp |
Enables LLM assistants to store, retrieve, and update user-specific context memory including travel preferences and general information through a chat interface. Provides analytics on tool usage patterns and token costs for continuous improvement.
Provides intelligent code context management and semantic search capabilities for software development, enabling natural language queries to find relevant code snippets, functions, and classes across Python, JavaScript, TypeScript, and SQL codebases.
Enables AI agents to break down complex tasks into manageable pieces using a structured JSON format with task tracking, context preservation, and progress monitoring capabilities.
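A structured task record of this kind might look like the following. The field names here are hypothetical; the server's actual schema is not published in this blurb.

```python
import json

# Hypothetical task record illustrating decomposition, context, and tracking
task = {
    "id": "task-1",
    "goal": "Add OAuth login",
    "status": "in_progress",
    "context": {"branch": "feature/oauth", "decisions": ["use PKCE flow"]},
    "subtasks": [
        {"id": "task-1.1", "goal": "Register OAuth client", "status": "done"},
        {"id": "task-1.2", "goal": "Implement callback route", "status": "pending"},
    ],
}

def progress(t):
    """Fraction of subtasks completed, for progress monitoring."""
    subs = t.get("subtasks", [])
    done = sum(1 for s in subs if s["status"] == "done")
    return done / len(subs) if subs else (1.0 if t["status"] == "done" else 0.0)

print(json.dumps(task["id"]), progress(task))
```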
A CloudFlare Workers-based MCP server that provides semantic memory and journal capabilities with vector search. Enables users to store, search, and retrieve memories and journal entries using AI-powered semantic similarity without any local setup required.
Provides intelligent context management for AI development sessions, allowing users to track token usage, manage conversation context, and seamlessly restore context when reaching token limits.
Builds rich code graphs from TypeScript/NestJS codebases using AST analysis and Neo4j, enabling semantic search, natural language querying, and intelligent graph traversal to provide deep contextual understanding of code relationships and dependencies.
This server provides an API to query Large Language Models using context from local files, supporting various models and file types for context-aware responses.
Enables access to Usage and Billing APIs for managing accounts, products, meters, plans, and usage reporting. Supports operations like creating products/plans, reporting usage, and retrieving billing information.
Provides persistent tool context that survives across Claude Desktop chat sessions, automatically injecting tool-specific rules, syntax preferences, and best practices. Eliminates the need to re-establish context in each new conversation.
A static MCP server that helps AI models maintain tool context across chat sessions, preventing loss of important information and keeping conversations smooth and uninterrupted.
A Model Context Protocol server that provides web content fetching capabilities with robots.txt checking removed, allowing LLMs to retrieve and convert web content to markdown.
Enables AI to automatically search, retrieve, and organize your Cursor chat history across sessions. Supports tagging, nicknames, project-scoped search, and full-text search to maintain context between conversations.
A local Model Context Protocol server designed to share contextual information between an AI and a user. It primarily provides a tool to retrieve the current date and time in ISO 8601 format based on the server's local timezone.
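The tool's behavior is simple enough to show in full. A minimal sketch of an ISO 8601 local-time function (the server's own function name may differ):

```python
from datetime import datetime

def current_datetime() -> str:
    """Return the current local date and time in ISO 8601 format,
    including the server's UTC offset."""
    return datetime.now().astimezone().isoformat(timespec="seconds")

print(current_datetime())  # e.g. 2025-01-01T12:00:00+02:00
```

Calling `.astimezone()` on a naive `datetime.now()` attaches the server's local timezone, so the offset appears in the ISO string.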
An MCP server that enables processing of massive datasets up to 10M+ tokens using a recursive language model pattern for strategic chunking and analysis. It automates sub-queries and result aggregation using free local inference via Ollama or the Claude API to handle context beyond standard prompt limits.
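The recursive pattern reduces to: if the text fits the context limit, query it directly; otherwise split, recurse on each half, and aggregate the partial answers with one more model call. The sketch below is a toy illustration of that loop, not the server's code; the "model" is a stand-in function that counts a word instead of calling Ollama or Claude.

```python
def answer(model, text, query, limit=1000):
    """Recursively split text that exceeds the context limit,
    query each chunk, then synthesize the partial answers."""
    if len(text) <= limit:
        return model(f"{query}\n\n{text}")
    mid = len(text) // 2
    parts = [answer(model, text[:mid], query, limit),
             answer(model, text[mid:], query, limit)]
    # Aggregate sub-answers with one more model call
    return model("Combine these partial answers to: " + query + "\n" + "\n".join(parts))

def toy_model(prompt):
    # Stand-in for an LLM call: counts 'error' in a chunk, or sums sub-counts
    if prompt.startswith("Combine"):
        return str(sum(int(x) for x in prompt.splitlines()[1:]))
    _, _, body = prompt.partition("\n\n")
    return str(body.count("error"))

text = ("ok " * 300 + "error ") * 5   # well over the 1000-char limit
print(answer(toy_model, text, "count 'error'"))
```

A real implementation would also need overlap between chunks (or sentence-boundary splitting) so that matches straddling a split point are not lost.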
Enables cross-window context sharing in VS Code by persisting Copilot conversations to SQLite, allowing code discussions from one repository to be accessible in other VS Code windows through smart entity matching and search.
Enables access to Brazil's Central Bank (BCB) Open Data API for payment methods, allowing queries about PIX, card transactions, ATM terminals, interchange rates, and other payment statistics through natural language.
An MCP server implementing Recursive Language Models (RLM) to process arbitrarily large contexts through a programmatic probe, recurse, and synthesize loop. It enables LLMs to perform multi-step investigations and evidence-backed extraction across massive file sets without being limited by standard context windows.
Enables efficient code navigation and retrieval through natural language search, BM25 ranking, and fuzzy matching across multiple programming languages. It drastically reduces token usage by allowing Claude to query specific code symbols and logic instead of reading entire files.
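The BM25 part of such a pipeline can be sketched directly from the Okapi BM25 formula. This is a generic illustration of BM25 over code symbols, not this server's implementation; the example symbols are invented.

```python
import math, re

def tokenize(s):
    # Lowercase alphanumeric tokens; underscores split compound names
    return re.findall(r"[a-z0-9]+", s.lower())

def bm25_rank(query, docs, k1=1.5, b=0.75):
    """Return document indices ranked by Okapi BM25 score against the query."""
    toks = [tokenize(d) for d in docs]
    avgdl = sum(len(t) for t in toks) / len(toks)
    N = len(docs)
    scores = []
    for t in toks:
        score = 0.0
        for term in set(tokenize(query)):
            df = sum(1 for d in toks if term in d)   # document frequency
            if df == 0:
                continue
            idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
            tf = t.count(term)
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(score)
    return sorted(range(N), key=lambda i: -scores[i])

symbols = ["def parse_config(path)", "class TokenBucket", "def load_config_file(path)"]
order = bm25_rank("config file", symbols)
print([symbols[i] for i in order])
```

Returning ranked symbol signatures instead of whole files is what yields the token savings: the client reads only the few definitions that score highly.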
Enables AI-driven semantic code search using Windsurf's reverse-engineered SWE-grep protocol to query local codebases with natural language. It executes local search tools like ripgrep and tree-node-cli to return relevant file paths and line ranges to MCP-compatible clients.
A security wrapper for MCP servers that provides trust-on-first-use pinning, guardrail scanning, and protection against prompt injection attacks. It acts as an intermediary layer to ensure universal compatibility and secure enforcement of server configurations across various MCP host applications.
An MCP server that provides an interface for querying the AtherOS knowledge base through an API, allowing users to create chat sessions and send queries to retrieve information.
Provides semantic code search capabilities that run 100% locally using EmbeddingGemma embeddings. Enables finding code by meaning across 15 file extensions and 9+ programming languages without API costs or sending code to the cloud.
A memory management system that enables AI assistants to store, search, and visualize persistent conversation contexts using a Neo4j graph database. It provides an MCP server for integration with Claude Desktop along with a web-based dashboard for managing relationship-based knowledge.
An MCP Server that provides a conversational interface to the UK Open Banking account information API, allowing agents to interact with bank account data through natural language commands.
A local-first, agent-agnostic MCP server that provides semantic search, persistent memory, and automated code review capabilities for development workflows. It leverages the Auggie SDK to offer advanced tools for codebase indexing, implementation planning, and deterministic static analysis.
Enables semantic search of project documentation using hybrid vector and full-text search with fast and deep query modes for immediate results or complex multi-round synthesis.
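One common way to fuse vector and full-text result lists is reciprocal-rank fusion (RRF), which needs only the two rankings, not their raw scores. This is a generic sketch of that fusion step, not necessarily the method this server uses; the document names are invented.

```python
def rrf(rankings, k=60):
    """Reciprocal-rank fusion: each list contributes 1/(k + rank) per document."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["setup.md", "deploy.md", "api.md"]    # semantic-similarity order
fulltext_hits = ["api.md", "setup.md", "faq.md"]     # keyword-match order
print(rrf([vector_hits, fulltext_hits]))
```

Documents that appear high in both lists (here `setup.md`) float to the top, while documents found by only one retriever still surface further down.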
Enables AI-driven semantic code search by leveraging Windsurf's reverse-engineered protocol to perform multi-round local searches using natural language. It automatically executes bundled ripgrep and file operations to return relevant code snippets and file paths to MCP-compatible clients.
Provides AI coding assistants with context optimization tools including targeted file analysis, intelligent terminal command execution with LLM-powered output extraction, and web research capabilities. Helps reduce token usage by extracting only relevant information instead of processing entire files and command outputs.