Shared Knowledge MCP Server
This is a knowledge base MCP server that can be shared across multiple AI assistants (CLINE, Cursor, Windsurf, Claude Desktop). It uses Retrieval-Augmented Generation (RAG) for efficient information retrieval and use. By sharing one knowledge base between multiple AI assistant tools, it provides consistent information access.
Features
- A common knowledge base can be used across multiple AI assistants
- High-precision information retrieval using RAG
- Type-safe implementation using TypeScript
- Supports multiple vector stores (HNSWLib, Chroma, Pinecone, Milvus, Weaviate)
- Extensibility through abstracted interfaces
Installation
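A typical setup, assuming an npm-based workflow (the repository URL below is a placeholder):

```bash
git clone https://github.com/your-org/shared-knowledge-mcp.git  # placeholder URL
cd shared-knowledge-mcp
npm install
npm run build  # assuming a conventional build script
```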
Configuration
Add the MCP server settings to the configuration file of each AI assistant.
VSCode (for CLINE/Cursor)
Add the following to `~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`:
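A minimal entry in the standard MCP server configuration format; the server name, install path, and knowledge base path are placeholders:

```json
{
  "mcpServers": {
    "shared-knowledge-base": {
      "command": "node",
      "args": ["/path/to/shared-knowledge-mcp/dist/index.js"],
      "env": {
        "KNOWLEDGE_BASE_PATH": "/path/to/your/knowledge-base",
        "OPENAI_API_KEY": "your-openai-api-key"
      }
    }
  }
}
```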
Example using Pinecone
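The same entry with Pinecone selected via the environment variables from the table below; the keys inside VECTOR_STORE_CONFIG are assumptions:

```json
{
  "mcpServers": {
    "shared-knowledge-base": {
      "command": "node",
      "args": ["/path/to/shared-knowledge-mcp/dist/index.js"],
      "env": {
        "KNOWLEDGE_BASE_PATH": "/path/to/your/knowledge-base",
        "OPENAI_API_KEY": "your-openai-api-key",
        "VECTOR_STORE_TYPE": "pinecone",
        "VECTOR_STORE_CONFIG": "{\"apiKey\":\"your-pinecone-api-key\",\"indexName\":\"your-index\"}"
      }
    }
  }
}
```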
Claude Desktop
Add the following to `~/Library/Application Support/Claude/claude_desktop_config.json`:
Example using HNSWLib (default)
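The entry format is the same as above; since HNSWLib is the default, no VECTOR_STORE_TYPE is needed (paths are placeholders):

```json
{
  "mcpServers": {
    "shared-knowledge-base": {
      "command": "node",
      "args": ["/path/to/shared-knowledge-mcp/dist/index.js"],
      "env": {
        "KNOWLEDGE_BASE_PATH": "/path/to/your/knowledge-base",
        "OPENAI_API_KEY": "your-openai-api-key"
      }
    }
  }
}
```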
Example using Weaviate
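With Weaviate, point VECTOR_STORE_CONFIG at the local instance; the config keys are assumptions:

```json
{
  "mcpServers": {
    "shared-knowledge-base": {
      "command": "node",
      "args": ["/path/to/shared-knowledge-mcp/dist/index.js"],
      "env": {
        "KNOWLEDGE_BASE_PATH": "/path/to/your/knowledge-base",
        "OPENAI_API_KEY": "your-openai-api-key",
        "VECTOR_STORE_TYPE": "weaviate",
        "VECTOR_STORE_CONFIG": "{\"url\":\"http://localhost:8080\"}"
      }
    }
  }
}
```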
Note: If you are using Weaviate, you must first start the Weaviate server, which can be done with the following command:
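```bash
# assuming the docker-compose.yml included with the repository
docker-compose up -d
```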
Development
Common tasks are starting the development server, building, and running the built server in production.
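A sketch of the corresponding commands, assuming conventional npm script names (the actual script names may differ):

```bash
npm run dev    # start the development server
npm run build  # compile the TypeScript sources
npm start      # run the compiled server in production
```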
Available Tools
rag_search
Search for information in the knowledge base.
Search Request
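A sketch of the request shape; only the query text is certain from this document, and the remaining fields are assumptions inferred from the configuration options and result metadata described below:

```typescript
// Hypothetical request type; actual field names may differ.
interface RagSearchRequest {
  query: string;       // search text (required)
  limit?: number;      // maximum number of results to return
  threshold?: number;  // similarity score threshold, overriding SIMILARITY_THRESHOLD
  filter?: {
    type?: string;     // restrict results to a document type
  };
}
```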
Usage Example
Basic search:
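For instance, using the assumed request shape above:

```json
{
  "query": "How do I configure Weaviate?"
}
```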
Advanced search:
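Adding the assumed optional fields:

```json
{
  "query": "vector store configuration",
  "limit": 5,
  "threshold": 0.8,
  "filter": { "type": "markdown" }
}
```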
Search Results
Response example
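A sketch of a response, with metadata fields inferred from the description below (the exact field names are assumptions):

```json
{
  "results": [
    {
      "content": "Weaviate configuration is managed in the docker-compose.yml file...",
      "score": 0.87,
      "metadata": {
        "source": "docs/weaviate.md",
        "type": "markdown",
        "summary": "How to run and configure the Weaviate environment",
        "keywords": ["weaviate", "docker", "configuration"]
      }
    }
  ]
}
```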
These extended search capabilities allow the LLM to process information more accurately and efficiently. Additional metadata such as source location, document type, summary, and keywords helps the LLM better understand and use search results.
How it works
- At startup, reads Markdown files (.md, .mdx) and text files (.txt) in the specified directory
- Splits the documents into chunks and vectorizes them using the OpenAI API
- Builds a vector index using the selected vector store (default: HNSWLib)
- Returns the documents most similar to a search query
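A minimal sketch of this pipeline, assuming the server is built on LangChain's JS vector store and embedding abstractions (which match the store names below; the actual implementation may differ):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { Document } from "langchain/document";

async function buildIndex(rawDocs: Document[]): Promise<HNSWLib> {
  // Split documents into overlapping chunks (CHUNK_SIZE / CHUNK_OVERLAP).
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
    chunkOverlap: 200,
  });
  const chunks = await splitter.splitDocuments(rawDocs);

  // Vectorize the chunks with OpenAI embeddings and index them in HNSWLib.
  return HNSWLib.fromDocuments(chunks, new OpenAIEmbeddings());
}

// At query time: return the k documents most similar to the search query.
async function search(store: HNSWLib, query: string) {
  return store.similaritySearchWithScore(query, 5);
}
```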
Supported Vector Stores
- HNSWLib: A fast vector store persisted on the local file system (default)
- Chroma: An open-source vector database
- Pinecone: A managed vector database service (API key required)
- Milvus: A vector search engine for large-scale workloads
- Weaviate: A schema-first vector database (Docker required)
Each vector store is exposed through an abstracted interface, making it easy to switch between them as needed.
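A sketch of what such an abstraction might look like; the interface and method names are assumptions, not the repository's actual API:

```typescript
// Hypothetical adapter interface over the supported stores.
interface VectorStoreAdapter {
  // Index pre-chunked documents with their metadata.
  addDocuments(
    chunks: { content: string; metadata: Record<string, unknown> }[]
  ): Promise<void>;
  // Return the k most similar documents with their similarity scores.
  similaritySearch(
    query: string,
    k: number
  ): Promise<{ content: string; score: number }[]>;
}
```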
Managing the vector store environment
HNSWLib (default)
HNSWLib saves the vector store on the local file system, so no special configuration is required.
Rebuilding the vector store:
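Because the index is persisted on disk and indexing happens at startup, a rebuild can be as simple as deleting the saved index and restarting (the index path below is an assumption):

```bash
rm -rf ./vector-store   # hypothetical index location; check your configuration
npm run dev             # the index is rebuilt on startup
```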
Weaviate
To use Weaviate, you need Docker.
Typical operations (a command sketch follows this list):
- Start the Weaviate environment
- Rebuild the vector store
- Check the status of Weaviate
- Stop the Weaviate environment
- Delete the Weaviate data completely (only if necessary)
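A minimal sketch using plain docker-compose against the repository's compose file; the repository's own helper scripts may differ:

```bash
docker-compose up -d                              # start the Weaviate environment
# to rebuild the vector store, restart the server so it re-indexes at startup
curl http://localhost:8080/v1/.well-known/ready   # check status (Weaviate readiness endpoint)
docker-compose down                               # stop the environment (data is preserved)
docker-compose down -v                            # delete data completely (removes the weaviate_data volume)
```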
Weaviate configuration is managed in the docker-compose.yml file. By default, the following settings apply:
- Port: 8080
- Authentication: anonymous access enabled
- Vectorization module: none (external embeddings are used)
- Data storage: Docker volume (weaviate_data)
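A docker-compose.yml along these lines would produce those defaults; the repository's actual file may differ:

```yaml
services:
  weaviate:
    image: semitechnologies/weaviate:latest
    ports:
      - "8080:8080"
    environment:
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: "true"  # anonymous access
      DEFAULT_VECTORIZER_MODULE: "none"                # embeddings supplied externally
      PERSISTENCE_DATA_PATH: "/var/lib/weaviate"
    volumes:
      - weaviate_data:/var/lib/weaviate
volumes:
  weaviate_data:
```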
Configuration options
| Environment variable | Description | Default |
|---|---|---|
| KNOWLEDGE_BASE_PATH | Path to the knowledge base (required) | - |
| OPENAI_API_KEY | OpenAI API key (required) | - |
| SIMILARITY_THRESHOLD | Similarity score threshold for search (0-1) | 0.7 |
| CHUNK_SIZE | Chunk size for splitting text | 1000 |
| CHUNK_OVERLAP | Overlap size between chunks | 200 |
| VECTOR_STORE_TYPE | Vector store to use ("hnswlib", "chroma", "pinecone", "milvus", "weaviate") | "hnswlib" |
| VECTOR_STORE_CONFIG | Vector store configuration (JSON string) | {} |
License
ISC
Contributing
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Create a Pull Request