# Knowledge Base MCP Server
A mem0-like memory system for GitHub Copilot that provides persistent knowledge storage and retrieval capabilities using local ChromaDB. This MCP server enables GitHub Copilot to save and retrieve contextual information about your development environment, enhancing its responses with persistent knowledge.
## Features
- **Persistent Memory**: Save development knowledge, code snippets, and environment configurations
- **Semantic Search**: Vector-based similarity search using local embeddings
- **Smart Categorization**: Automatic extraction of technologies, URLs, and memory types
- **Local Storage**: All data stored locally for corporate compliance
- **Fast Retrieval**: Sub-500ms search performance
- **GitHub Copilot Integration**: Designed specifically for Copilot workflows
- **Web UI**: Optional Streamlit interface for searching and managing memories
## Memory Types
- **Environment**: Configuration, URLs, dashboard locations
- **Code Snippet**: Code examples, patterns, implementations
- **Operational**: Troubleshooting steps, fixes, operational knowledge
- **Architectural**: Design decisions, patterns, system architecture
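The automatic categorization mentioned under Features can be imagined as a simple keyword heuristic. The sketch below is illustrative only — the keyword lists and function names are assumptions, not the server's actual logic:

```python
import re

# Hypothetical keyword map; the real server's heuristics may differ.
TYPE_KEYWORDS = {
    "operational": ["fails", "fix", "restart", "troubleshoot", "error"],
    "code_snippet": ["```", "def ", "class ", "[test]", "public "],
    "environment": ["http://", "https://", "dashboard", "config"],
}

def infer_memory_type(content: str) -> str:
    """Guess a memory type from content keywords; fall back to 'architectural'."""
    lowered = content.lower()
    for memory_type, keywords in TYPE_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return memory_type
    return "architectural"

def extract_urls(content: str) -> list[str]:
    """Pull URLs out of a memory so they can be stored as metadata."""
    return re.findall(r"https?://\S+", content)
```

For example, `infer_memory_type("when dynatrace fails in tanzu, use DT_DISABLE flag")` would classify the entry as `operational`.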
## Installation

1. Clone the repository:
```bash
git clone <repo-url>
cd knowledge-base-mcp
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Start the server:
```bash
python kb_server.py
```
4. Access the Web UI (optional):
```bash
streamlit run kb_ui.py
```
This launches a Streamlit UI at http://localhost:8501 for managing memories.
## GitHub Copilot Integration

### Configure Claude Desktop (for testing)
Add to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "knowledge-base": {
      "command": "python",
      "args": ["/absolute/path/to/knowledge-base-mcp/kb_server.py"],
      "env": {
        "KB_DATA_DIR": "/absolute/path/to/knowledge-base-mcp/kb_data"
      }
    }
  }
}
```

### VS Code GitHub Copilot Configuration
Add to your VS Code settings or MCP configuration:
```json
{
  "mcpServers": {
    "knowledge-base": {
      "command": "python",
      "args": ["/absolute/path/to/knowledge-base-mcp/kb_server.py"],
      "env": {
        "KB_DATA_DIR": "/absolute/path/to/knowledge-base-mcp/kb_data",
        "KB_INITIAL_FILE": "/absolute/path/to/knowledge-base-mcp/initial_knowledge.txt"
      }
    }
  }
}
```

## Usage Examples
### Saving Memories
In GitHub Copilot, use the `kb_save` tool:

```
#kb_save we use splunk on the cloud at https://company.splunkcloud.com
#kb_save when dynatrace fails in tanzu, use DT_DISABLE flag and restart the instance
#kb_save here's our graphql mutation test pattern:
[Test]
public async Task TestGraphQLMutation() {
    // test code here
}
```
### Searching Knowledge
GitHub Copilot will automatically search when you ask questions:
"How do I check application logs?" β Copilot calls kb_search("application logs") β Returns Splunk dashboard URL + previous solutions
### Manual Search
You can also explicitly search:
```
#kb_search graphql testing
#kb_search dynatrace troubleshooting
#kb_search dashboard urls
```
## Available Tools
### `kb_save`
Save a memory to the knowledge base.
- **content**: The memory content to save
- **memory_type**: Optional type (environment, code_snippet, operational, architectural)
- **tags**: Optional list of tags for categorization
### `kb_search`
Search for relevant memories.
- **query**: Search query
- **limit**: Maximum results (default: 5)
- **memory_type**: Filter by type
- **include_metadata**: Include detailed metadata
### `kb_list`
List all saved memories.
- **memory_type**: Filter by type
- **limit**: Maximum entries (default: 10)
- **include_content**: Show full content vs summary
### `kb_delete`
Delete a memory by ID.
- **memory_id**: Full or partial memory ID
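Since `kb_delete` accepts a partial ID, it presumably resolves the fragment against stored IDs before deleting. A sketch of one safe way to do that — the function name and error handling here are assumptions, not the server's actual implementation:

```python
def resolve_memory_id(partial_id: str, known_ids: list[str]) -> str:
    """Resolve a full or partial memory ID, refusing ambiguous prefixes."""
    matches = [mid for mid in known_ids if mid.startswith(partial_id)]
    if not matches:
        raise KeyError(f"no memory matches {partial_id!r}")
    if len(matches) > 1:
        raise KeyError(f"{partial_id!r} is ambiguous: {matches}")
    return matches[0]
```

Refusing ambiguous prefixes avoids deleting the wrong memory when two IDs share a leading fragment.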
## Configuration
### Environment Variables
- `KB_DATA_DIR`: Directory for ChromaDB storage (default: `./kb_data`)
- `KB_INITIAL_FILE`: Optional path to initial knowledge file to load on startup
- `KB_UI_PORT`: Port for the Streamlit UI (default: `8501`)
### Initial Knowledge File
You can bootstrap the knowledge base with pre-existing information by providing an initial knowledge file. The file should contain knowledge entries separated by double newlines (`\n\n`).
**Example `initial_knowledge.txt`:**

```
we use splunk on the cloud at https://company.splunkcloud.com for application logging

our grafana dashboard is at https://grafana.internal.com/dashboards

when dynatrace fails in tanzu, use DT_DISABLE flag and restart the instance

here's our graphql test pattern:
[Test]
public async Task TestAPI() {
    // test code here
}
```
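Parsing such a file reduces to splitting on blank lines. A minimal sketch of the loading step, assuming only the double-newline format described above (the real loader also extracts metadata and skips duplicates):

```python
from pathlib import Path

def load_initial_knowledge(path: str) -> list[str]:
    """Split an initial-knowledge file into entries separated by blank lines."""
    text = Path(path).read_text(encoding="utf-8")
    entries = [entry.strip() for entry in text.split("\n\n")]
    return [entry for entry in entries if entry]
```

Note that a multi-line entry (like the GraphQL test pattern above) stays intact because only *blank* lines delimit entries.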
**Features:**
- ✅ Automatic metadata extraction (technologies, URLs, memory types)
- ✅ Entries marked with `source: initial_knowledge`
- ✅ Loads only on first startup (won't duplicate entries)
- ✅ Supports all content types (code, configs, operational knowledge)
### Embedding Model
The server uses `all-MiniLM-L6-v2` by default for local embeddings. This provides:
- Fast inference
- Good semantic understanding
- No external API calls
- Small memory footprint
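Semantic search ranks memories by vector similarity rather than keyword overlap, and cosine similarity between embedding vectors is the standard measure. A toy illustration with hand-made 3-dimensional vectors (real all-MiniLM-L6-v2 embeddings are 384-dimensional):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": the query vector lies closer to the logging memory.
query = [0.9, 0.1, 0.0]            # "application logs"
logging_memory = [0.8, 0.2, 0.1]   # Splunk dashboard entry
graphql_memory = [0.1, 0.0, 0.9]   # GraphQL test pattern

assert cosine_similarity(query, logging_memory) > cosine_similarity(query, graphql_memory)
```

This is why a query like "application logs" can surface the Splunk entry even though the saved memory never contains the word "logs" verbatim.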
## Data Storage
All data is stored locally in ChromaDB format:
- **Vector embeddings**: For semantic search
- **Document content**: Raw memory text
- **Metadata**: Extracted technologies, URLs, timestamps, access counts
## Performance
- **Search latency**: < 500ms typical
- **Storage capacity**: 10,000+ memories
- **Memory usage**: ~200MB for model + data
- **Embedding generation**: ~10ms per memory
## Security & Privacy
- ✅ **Local-only storage**: No cloud dependencies
- ✅ **No external APIs**: Embeddings generated locally
- ✅ **File-system permissions**: Standard OS-level access control
- ✅ **Corporate compliant**: Designed for enterprise environments
## Troubleshooting
### Server Won't Start
- Check Python version (3.9+ required)
- Verify all dependencies installed: `pip install -r requirements.txt`
- Check data directory permissions
### Poor Search Results
- Ensure memories are saved with clear, descriptive content
- Use specific technology keywords
- Try different search terms
### Memory Not Found
- Use `kb_list` to see all saved memories
- Check memory type filters
- Verify memory was actually saved (check for success message)
## Development
### Project Structure

```
knowledge-base-mcp/
├── kb_server.py                  # Main MCP server
├── kb_ui.py                      # Streamlit web interface
├── test_server.py                # Functionality tests
├── test_initial_knowledge.py    # Initial knowledge loading tests
├── examples.py                   # Usage demonstrations
├── requirements.txt              # Python dependencies
├── initial_knowledge.txt         # Example initial knowledge file
├── claude_desktop_config.json   # Configuration template
├── README.md                     # Complete documentation
├── SETUP.md                      # Quick setup guide
├── PRD-Knowledge-Base-MCP.md    # Product requirements
└── kb_data/                      # ChromaDB storage (created automatically)
```
### Adding New Features
The server uses FastMCP for easy tool development:
```python
@mcp.tool()
async def new_tool(param: str) -> str:
    """Tool description."""
    # Implementation
    return "Result"
```

## License
[Add your license here]

## Contributing
[Add contribution guidelines here]