Shared Knowledge MCP Server

by j5ik2o

A hybrid server: it can run both locally and remotely, depending on the configuration or use case.

Integrations

  • Docker: required for the Weaviate vector database option; scripts for managing the Docker-based Weaviate environment are included.
  • Git: Git-related information, such as commit message formats and conventions, is available through the knowledge base search functionality.
  • Markdown: Markdown (.md, .mdx) files can be indexed and searched, allowing AI assistants to retrieve information from documentation stored in Markdown format.


This is a knowledge base MCP server that can be shared across AI assistants (CLINE, Cursor, Windsurf, Claude Desktop). It uses Retrieval Augmented Generation (RAG) for efficient information search and retrieval. Because multiple AI assistant tools share the same knowledge base, they all get consistent access to the same information.

Features

  • A common knowledge base can be used across multiple AI assistants
  • High-precision information retrieval using RAG
  • Type-safe implementation using TypeScript
  • Supports multiple vector stores (HNSWLib, Chroma, Pinecone, Milvus, Weaviate)
  • Extensibility through abstracted interfaces

Install

```shell
git clone https://github.com/yourusername/shared-knowledge-mcp.git
cd shared-knowledge-mcp
npm install
```

Configuration

Add the MCP server settings to the configuration file of each AI assistant.

VSCode (for CLINE/Cursor)

~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json :

```json
{
  "mcpServers": {
    "shared-knowledge-base": {
      "command": "node",
      "args": ["/path/to/shared-knowledge-mcp/dist/index.js"],
      "env": {
        "KNOWLEDGE_BASE_PATH": "/path/to/your/rules",
        "OPENAI_API_KEY": "your-openai-api-key",
        "SIMILARITY_THRESHOLD": "0.7",
        "CHUNK_SIZE": "1000",
        "CHUNK_OVERLAP": "200",
        "VECTOR_STORE_TYPE": "hnswlib"
      }
    }
  }
}
```

Example using Pinecone

```json
{
  "mcpServers": {
    "shared-knowledge-base": {
      "command": "node",
      "args": ["/path/to/shared-knowledge-mcp/dist/index.js"],
      "env": {
        "KNOWLEDGE_BASE_PATH": "/path/to/your/rules",
        "OPENAI_API_KEY": "your-openai-api-key",
        "VECTOR_STORE_TYPE": "pinecone",
        "VECTOR_STORE_CONFIG": "{\"apiKey\":\"your-pinecone-api-key\",\"environment\":\"your-environment\",\"index\":\"your-index-name\"}"
      }
    }
  }
}
```

Claude Desktop

~/Library/Application Support/Claude/claude_desktop_config.json :

Example using HNSWLib (default)

```json
{
  "mcpServers": {
    "shared-knowledge-base": {
      "command": "node",
      "args": ["/path/to/shared-knowledge-mcp/dist/index.js"],
      "env": {
        "KNOWLEDGE_BASE_PATH": "/path/to/your/docs",
        "OPENAI_API_KEY": "your-openai-api-key",
        "SIMILARITY_THRESHOLD": "0.7",
        "CHUNK_SIZE": "1000",
        "CHUNK_OVERLAP": "200",
        "VECTOR_STORE_TYPE": "hnswlib",
        "VECTOR_STORE_CONFIG": "{}"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}
```

Example using Weaviate

```json
{
  "mcpServers": {
    "shared-knowledge-base": {
      "command": "node",
      "args": ["/path/to/shared-knowledge-mcp/dist/index.js"],
      "env": {
        "KNOWLEDGE_BASE_PATH": "/path/to/your/docs",
        "OPENAI_API_KEY": "your-openai-api-key",
        "SIMILARITY_THRESHOLD": "0.7",
        "CHUNK_SIZE": "1000",
        "CHUNK_OVERLAP": "200",
        "VECTOR_STORE_TYPE": "weaviate",
        "VECTOR_STORE_CONFIG": "{\"url\":\"http://localhost:8080\",\"className\":\"Document\",\"textKey\":\"content\"}"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}
```

Note: When using Weaviate, you must start the Weaviate server first:

```shell
./start-weaviate.sh
```

Development

Start the development server

```shell
npm run dev
```

Build

```shell
npm run build
```

Running in production

```shell
npm start
```

Available Tools

rag_search

Searches the knowledge base for information.

Search Request

```typescript
interface SearchRequest {
  // Search query (required)
  query: string;
  // Maximum number of results to return (default: 5)
  limit?: number;
  // Search context (optional)
  context?: string;
  // Filtering options (optional)
  filter?: {
    // Filter by document type (e.g. ["markdown", "code"])
    documentTypes?: string[];
    // Filter by source path pattern (e.g. "*.md")
    sourcePattern?: string;
  };
  // Information to include in the results (optional)
  include?: {
    metadata?: boolean;  // include metadata
    summary?: boolean;   // generate a summary
    keywords?: boolean;  // extract keywords
    relevance?: boolean; // generate a relevance explanation
  };
}
```

Usage Example

Basic search:

```typescript
const result = await callTool("rag_search", {
  query: "commit message format",
  limit: 3
});
```

Advanced search:

```typescript
const result = await callTool("rag_search", {
  query: "commit message format",
  context: "researching how to use Git",
  filter: {
    documentTypes: ["markdown"],
    sourcePattern: "git-*.md"
  },
  include: {
    summary: true,
    keywords: true,
    relevance: true
  }
});
```

Search Results

```typescript
interface SearchResult {
  // Document content relevant to the search query
  content: string;
  // Similarity score (0-1)
  score: number;
  // Source file path
  source: string;
  // Position information
  startLine?: number;   // start line
  endLine?: number;     // end line
  startColumn?: number; // start column
  endColumn?: number;   // end column
  // Document type (e.g. "markdown", "code", "text")
  documentType?: string;
  // Additional information (only present when requested via the include options)
  summary?: string;                   // content summary
  keywords?: string[];                // related keywords
  relevance?: string;                 // relevance explanation
  metadata?: Record<string, unknown>; // metadata
}
```

Response example

```json
{
  "results": [
    {
      "content": "# Commit Message Format\n\nWrite commit messages in the following format:\n\n```\n<type>(<scope>): <subject>\n\n<body>\n\n<footer>\n```\n\n...",
      "score": 0.92,
      "source": "/path/to/rules/git-conventions.md",
      "startLine": 1,
      "endLine": 10,
      "startColumn": 1,
      "endColumn": 35,
      "documentType": "markdown",
      "summary": "A document describing the commit message format",
      "keywords": ["commit", "message", "format", "type", "scope"],
      "relevance": "This document contains information related to the search query \"commit message format\". Similarity score: 0.92"
    }
  ]
}
```

These expanded search capabilities let an LLM process information more accurately and efficiently. The additional details, such as position, document type, summary, and keywords, help the LLM better understand and use the search results.
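As a sketch of how a client might post-process these results, the snippet below filters by the same 0.7 similarity threshold used in the configuration examples and sorts the best matches first. The function name and the trimmed-down result type are illustrative, not part of the server's API.

```typescript
// Minimal shape of a result, mirroring the SearchResult interface above.
interface RankedResult {
  content: string;
  score: number; // similarity score (0-1)
  source: string;
}

// Keep only results at or above a similarity threshold, best matches first.
// This mirrors what SIMILARITY_THRESHOLD does on the server side.
function filterByThreshold(results: RankedResult[], threshold = 0.7): RankedResult[] {
  return results
    .filter((r) => r.score >= threshold)
    .sort((a, b) => b.score - a.score);
}

const sample: RankedResult[] = [
  { content: "a", score: 0.92, source: "git-conventions.md" },
  { content: "b", score: 0.55, source: "misc.md" },
  { content: "c", score: 0.81, source: "style.md" },
];

// Best matches first: git-conventions.md, then style.md; misc.md is dropped.
const top = filterByThreshold(sample);
console.log(top.map((r) => r.source));
```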

How it works

  1. At startup, the server reads the Markdown files (.md, .mdx) and text files (.txt) in the specified directory.
  2. It splits each document into chunks and vectorizes them using the OpenAI API.
  3. It builds a vector index using the selected vector store (default: HNSWLib).
  4. For each search query, it returns the documents most similar to the query.
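The four steps above can be sketched end to end. The toy below substitutes a character-frequency "embedding" and a linear scan for the real OpenAI embeddings and HNSWLib index, so it is self-contained and runnable; all function names are illustrative, not the server's actual code.

```typescript
// Step 2a: split a document into overlapping chunks.
function splitIntoChunks(text: string, chunkSize: number, overlap: number): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

// Step 2b: "embed" a chunk. A 26-dimensional letter-frequency vector
// stands in for a real embedding model here.
function embed(text: string): number[] {
  const v: number[] = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i]++;
  }
  return v;
}

// Cosine similarity between two vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Step 3: build the "index" at startup by chunking and embedding each document.
const docs = ["commit message format rules", "weaviate docker setup"];
const index = docs.flatMap((d) =>
  splitIntoChunks(d, 1000, 200).map((chunk) => ({ chunk, vector: embed(chunk) }))
);

// Step 4: embed the query and return the most similar chunk.
function search(query: string): string {
  const qv = embed(query);
  return index.reduce(
    (best, e) => (cosine(e.vector, qv) > cosine(embed(best), qv) ? e.chunk : best),
    index[0].chunk
  );
}
```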

Supported Vector Stores

  • HNSWLib: a fast vector store persisted on the local file system (default)
  • Chroma: an open-source vector database
  • Pinecone: a managed vector database service (API key required)
  • Milvus: a large-scale vector search engine
  • Weaviate: a schema-first vector database (Docker required)

Each vector store is exposed through an abstracted interface, making it easy to switch between them as needed.
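A minimal sketch of what such an abstraction might look like, with an in-memory store standing in for HNSWLib, Chroma, Pinecone, Milvus, or Weaviate. The interface and factory names here are hypothetical; the repository's actual interface may differ.

```typescript
// Hypothetical common interface every vector store adapter implements.
interface VectorStoreAdapter {
  addDocuments(docs: { id: string; vector: number[]; content: string }[]): Promise<void>;
  similaritySearch(vector: number[], limit: number): Promise<{ content: string; score: number }[]>;
}

// Toy in-memory implementation: linear scan with cosine similarity.
class InMemoryStore implements VectorStoreAdapter {
  private docs: { id: string; vector: number[]; content: string }[] = [];

  async addDocuments(docs: { id: string; vector: number[]; content: string }[]): Promise<void> {
    this.docs.push(...docs);
  }

  async similaritySearch(vector: number[], limit: number) {
    const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);
    const norm = (a: number[]) => Math.sqrt(dot(a, a));
    return this.docs
      .map((d) => ({
        content: d.content,
        score: dot(d.vector, vector) / (norm(d.vector) * norm(vector) || 1),
      }))
      .sort((a, b) => b.score - a.score)
      .slice(0, limit);
  }
}

// A factory keyed by something like VECTOR_STORE_TYPE lets callers switch
// implementations purely through configuration.
function createStore(type: string): VectorStoreAdapter {
  switch (type) {
    case "memory":
      return new InMemoryStore();
    default:
      throw new Error(`unsupported store: ${type}`);
  }
}
```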

Working with the vector store environments

HNSWLib (default)

HNSWLib saves the vector store on the local file system, so no special configuration is required.

Rebuild the vector store:

```shell
./rebuild-vector-store-hnsw.sh
```

Weaviate

To use Weaviate, you need Docker.

  1. Start the Weaviate environment:

     ```shell
     ./start-weaviate.sh
     ```

  2. Rebuild the vector store:

     ```shell
     ./rebuild-vector-store-weaviate.sh
     ```

  3. Check Weaviate's status:

     ```shell
     curl http://localhost:8080/v1/.well-known/ready
     ```

  4. Stop the Weaviate environment:

     ```shell
     docker-compose down
     ```

  5. Delete the Weaviate data completely (only if necessary):

     ```shell
     docker-compose down -v
     ```

Weaviate configuration is managed in the docker-compose.yml file. By default, the following settings are applied:

  • Port: 8080
  • Authentication: Anonymous access enabled
  • Vectorization module: none (embeddings are supplied externally)
  • Data storage: Docker volume ( weaviate_data )
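Assuming those defaults, a minimal docker-compose.yml might look roughly like the sketch below. The image tag and exact environment variable values are assumptions, not taken from the repository; consult the repository's own docker-compose.yml for the authoritative version.

```yaml
services:
  weaviate:
    image: semitechnologies/weaviate:latest   # image tag is an assumption
    ports:
      - "8080:8080"                           # port 8080, as described above
    environment:
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: "true"  # anonymous access enabled
      DEFAULT_VECTORIZER_MODULE: "none"                # embeddings supplied externally
      PERSISTENCE_DATA_PATH: /var/lib/weaviate
    volumes:
      - weaviate_data:/var/lib/weaviate       # data kept in the weaviate_data volume
volumes:
  weaviate_data:
```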

Configuration options

| Environment variable | Description | Default |
| --- | --- | --- |
| KNOWLEDGE_BASE_PATH | Path to the knowledge base (required) | - |
| OPENAI_API_KEY | OpenAI API key (required) | - |
| SIMILARITY_THRESHOLD | Similarity score threshold for search (0-1) | 0.7 |
| CHUNK_SIZE | Chunk size for splitting text | 1000 |
| CHUNK_OVERLAP | Chunk overlap size | 200 |
| VECTOR_STORE_TYPE | Vector store to use ("hnswlib", "chroma", "pinecone", "milvus", "weaviate") | "hnswlib" |
| VECTOR_STORE_CONFIG | Vector store configuration (JSON string) | {} |
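To illustrate how CHUNK_SIZE and CHUNK_OVERLAP interact: each new chunk starts CHUNK_SIZE − CHUNK_OVERLAP characters after the previous one, so with the defaults adjacent chunks share 200 characters of context. The small helper below (illustrative only, not the server's actual splitter) estimates how many chunks a document of a given length produces:

```typescript
// Estimates the number of chunks for a text of `textLength` characters.
// Each chunk after the first starts (size - overlap) characters later,
// so with the defaults the stride between chunk starts is 800.
function chunkCount(textLength: number, size = 1000, overlap = 200): number {
  if (textLength <= size) return 1;
  const step = size - overlap; // 800 with the defaults
  return 1 + Math.ceil((textLength - size) / step);
}

console.log(chunkCount(1000)); // 1: the whole text fits in one chunk
console.log(chunkCount(2600)); // 3: 1 + ceil(1600 / 800)
```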

License

ISC

Contributing

  1. Fork the repository
  2. Create a feature branch ( git checkout -b feature/amazing-feature )
  3. Commit your changes ( git commit -m 'Add some amazing feature' )
  4. Push the branch ( git push origin feature/amazing-feature )
  5. Open a Pull Request

