FGD Fusion Stack Pro - MCP Memory & LLM Integration
A professional Model Context Protocol (MCP) server with intelligent memory management, file monitoring, and multi-LLM provider support. Features a modern PyQt6 GUI for managing your development workspace with persistent memory and context-aware AI assistance.
Table of Contents
Overview
Architecture
Features
How It Works
LLM Connection & Memory System
Installation
Configuration
Usage
API Reference
Code Review Findings
Security Best Practices
Troubleshooting
Contributing
License
Support
Overview
FGD Fusion Stack Pro provides an MCP-compliant server that bridges your local development environment with Large Language Models. It maintains persistent memory of interactions, monitors file system changes, and provides intelligent context to LLM queries.
Key Components:
MCP Server: Model Context Protocol compliant server for tool execution
Memory Store: Persistent JSON-based memory with categories and access tracking
File Watcher: Real-time file system monitoring and change detection
LLM Backend: Multi-provider support (Grok, OpenAI, Claude, Ollama)
PyQt6 GUI: Professional dark-themed interface for management
FastAPI Server: Optional REST API wrapper for web integration
Architecture
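At a high level, the components listed above fit together roughly like this (simplified sketch):

```
PyQt6 GUI ── manages ──> MCP Server (stdio) <── tool calls ── MCP clients
                              │
                              ├── Memory Store (.fgd_memory.json)
                              ├── File Watcher (watchdog)
                              └── LLM Backend ──> Grok / OpenAI / Claude / Ollama

FastAPI Server (optional) ── REST ──> same backend, for web integration
```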
Features
MCP Tools
read_file: Read file contents with size limits and path validation
list_files: List files matching glob patterns (limited to prevent overload)
search_in_files: Full-text search across project files
llm_query: Query LLMs with automatic context injection from memory
remember: Store key-value pairs in categorized persistent memory
recall: Retrieve stored memories by key or category
Memory System
Persistent Storage: JSON-based memory file (.fgd_memory.json)
Categories: Organize memories by category (general, llm, file_change, etc.)
Access Tracking: Count how many times each memory is accessed
Timestamps: Track when memories are created
Context Window: Maintains rolling window of recent interactions (configurable limit)
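The exact on-disk layout is defined by the memory store implementation; the snippet below is only an illustration of what a .fgd_memory.json entry with categories, timestamps, and access counts might look like, with made-up keys and values.

```json
{
  "memories": {
    "general": {
      "project-goal": {
        "value": "Migrate the API layer to FastAPI",
        "created_at": "2025-01-15T10:32:00",
        "access_count": 3
      }
    },
    "llm": {
      "last-refactor-advice": {
        "value": "Split mcp_backend.py into server and tools modules",
        "created_at": "2025-01-15T11:02:00",
        "access_count": 1
      }
    }
  }
}
```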
File Monitoring
Watchdog Integration: Real-time file system event monitoring
Change Tracking: Records created, modified, and deleted files
Context Integration: File changes automatically added to context window
Size Limits: Configurable directory and file size limits to prevent overload
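To see how watchdog-based monitoring like this works in isolation, here is a minimal standalone sketch (handler and path names are illustrative, not the project's actual classes):

```python
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class ChangeLogger(FileSystemEventHandler):
    """Record create/modify/delete events, ignoring directories."""

    def on_any_event(self, event):
        if event.is_directory:
            return
        # event.event_type is "created", "modified", "deleted", or "moved"
        print(f"{event.event_type}: {event.src_path}")

observer = Observer()
observer.schedule(ChangeLogger(), path="./my_project", recursive=True)
observer.start()          # runs in a background thread
try:
    observer.join()       # block until interrupted
except KeyboardInterrupt:
    observer.stop()
    observer.join()
```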
GUI Features
Dark Theme: Professional GitHub-inspired dark mode (light mode available)
Live Logs: Real-time log viewing with filtering by level and search
Provider Selection: Easy switching between LLM providers (Grok, OpenAI, Claude, Ollama)
API Key Validation: Automatic fallback to Grok if other API keys are missing
Process Management: Clean start/stop of MCP server with proper cleanup
How It Works
1. Server Initialization
When you start the MCP server:
Configuration Loading: Reads YAML config with watch directory, memory file path, LLM settings
Memory Store Init: Loads existing memories from JSON file or creates new store
File Watcher Setup: Starts watchdog observer on specified directory
MCP Registration: Registers all available tools with the MCP protocol
Log Handler: Sets up file logging to track all operations
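In plain Python, that startup sequence boils down to something like the sketch below. File names follow the defaults mentioned in this README; the real server's structure may differ.

```python
import json
import logging
from pathlib import Path

import yaml

# 1. Configuration loading (the GUI generates a config like config.example.yaml)
config = yaml.safe_load(Path("config.example.yaml").read_text())

# 2. Memory store init: reuse the existing JSON file or start empty
memory_path = Path(config.get("memory_file", ".fgd_memory.json"))
memory = json.loads(memory_path.read_text()) if memory_path.exists() else {}

# 3./4. File watcher setup and MCP tool registration happen here
#       (see the watchdog sketch above and the MCP tool list).

# 5. Log handler: file logging for all operations
logging.basicConfig(filename="fgd_server.log", level=logging.INFO)
logging.info("Server initialized for %s", config.get("watch_dir", "."))
```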
2. File System Monitoring
The file watcher continuously monitors your project directory; every create, modify, and delete event is recorded and appended to the context window.
3. Memory Lifecycle
Memories persist across sessions, and each entry records its creation time and how often it has been accessed.
Categories Used:
general: User-defined key-value pairs
llm: Stores LLM responses for future reference
file_change: Automatic tracking of file modifications (in context window)
LLM Connection & Memory System
How LLM Queries Work
When you call the llm_query tool, the server gathers relevant context from the memory store and the recent file-change window, injects it into your prompt, sends the combined request to the selected provider, and stores the response in memory under the llm category.
Context Injection Example
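The exact prompt format is internal to the server, but conceptually the injected request looks something like this (contents invented for illustration):

```
[Recent file changes]
modified: src/api/routes.py
created:  tests/test_routes.py

[Relevant memories: category "llm"]
"refactor-plan": split routes.py into one module per resource

[User prompt]
Why might the new route tests be failing after the refactor?
```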
Supported LLM Providers
1. Grok (X.AI) - Default Provider
Model: grok-beta
API: https://api.x.ai/v1
Key: XAI_API_KEY environment variable
Features: Fast responses, good code understanding
2. OpenAI
Model: gpt-4o-mini (configurable)
API: https://api.openai.com/v1
Key: OPENAI_API_KEY environment variable
Features: Excellent reasoning, widely supported
3. Claude (Anthropic)
Model: claude-3-5-sonnet-20241022
API: https://api.anthropic.com/v1
Key: ANTHROPIC_API_KEY environment variable
Note: Currently mentioned in config but needs implementation completion
4. Ollama (Local)
Model: llama3 (configurable)
API: http://localhost:11434/v1
Key: No API key required (local)
Features: Privacy-focused, no cost, runs locally
Memory Utilization Strategies
The memory system enables:
Conversation Continuity: Previous LLM responses stored and retrievable
File Context Awareness: LLM knows which files were recently modified
Usage Analytics: Access counts help identify frequently referenced information
Session Persistence: Memories survive server restarts
Categorization: Easy filtering of memory types (code, docs, errors, etc.)
Installation
Prerequisites
Python 3.10 or higher
pip package manager
Virtual environment (recommended)
Steps
Clone or download the repository
cd MCPM
Create virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
Install dependencies
pip install -r requirements.txt
Set up environment variables
Create a .env file in the project root:
# Required for Grok (default provider)
XAI_API_KEY=your_xai_api_key_here
# Optional: Only needed if using these providers
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
Configure settings (optional)
Edit config.example.yaml to customize:
Default LLM provider
Model names
File size limits
Context window size
Configuration
config.example.yaml
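The keys below are an illustrative sketch based on the settings this README mentions (watch directory, memory file, provider, model, size limits, context window); consult the actual config.example.yaml in the repository for the authoritative names and defaults.

```yaml
# Illustrative only – key names and values may differ from the shipped config.example.yaml
watch_dir: ./my_project          # directory monitored by the file watcher
memory_file: .fgd_memory.json    # persistent memory store

llm:
  provider: grok                 # grok | openai | claude | ollama
  model: grok-beta               # e.g. gpt-4o-mini, claude-3-5-sonnet-20241022, llama3

limits:
  max_file_size_mb: 10           # per-file read limit (example value)
  max_dir_size_gb: 2             # default directory size limit mentioned in Troubleshooting
  context_window: 50             # number of recent interactions kept in context (example value)
```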
Usage
Option 1: PyQt6 GUI (Recommended)
GUI Workflow:
Click Browse to select your project directory
Choose LLM provider from dropdown (Grok, OpenAI, Claude, Ollama)
Click Start Server to launch MCP backend
View live logs with filtering options
Monitor server status in real-time
GUI automatically:
Generates config file for selected directory
Validates API keys and falls back to Grok if needed
Manages subprocess lifecycle
Provides log filtering by level (INFO, WARNING, ERROR)
Option 2: MCP Server Directly
This starts the MCP server in stdio mode for integration with MCP clients.
Option 3: FastAPI REST Server
Access endpoints at http://localhost:8456:
GET /api/status - Check server status
POST /api/start - Start MCP server
GET /api/logs?file=fgd_server.log - View logs
GET /api/memory - Retrieve all memories
POST /api/llm_query - Query LLM directly
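For example, with the FastAPI wrapper running you can exercise the endpoints like this (the request body field for /api/llm_query is an assumption; check server.py for the actual schema):

```python
import requests

BASE = "http://localhost:8456"

# Check server status and fetch stored memories
print(requests.get(f"{BASE}/api/status").json())
print(requests.get(f"{BASE}/api/memory").json())

# Query the configured LLM (payload field name assumed; verify against server.py)
resp = requests.post(f"{BASE}/api/llm_query", json={"prompt": "Summarize recent changes"})
print(resp.json())
```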
API Reference
MCP Tools
read_file
Read contents of a file in the watched directory.
list_files
List files matching a glob pattern.
search_in_files
Search for text across files.
llm_query
Query an LLM with automatic context injection.
remember
Store information in persistent memory.
recall
Retrieve stored memories.
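A minimal way to exercise these tools from Python is the official MCP client over stdio. The sketch below assumes the server entry point is mcp_backend.py and that remember/recall take key/value arguments; adjust names to match the actual tool schemas.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the MCP server as a stdio subprocess (entry point assumed)
    params = StdioServerParameters(command="python", args=["mcp_backend.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # Argument names are assumptions; check each tool's declared schema
            await session.call_tool("remember", {"key": "todo", "value": "add tests"})
            result = await session.call_tool("recall", {"key": "todo"})
            print(result)

asyncio.run(main())
```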
Code Review Findings
Critical Issues Identified
Exception Handling (Priority: HIGH)
Location: mcp_backend.py:239, local_directory_memory_mcp_refactored.py:38,172,239
Issue: Bare except: clauses without specific exception types
Impact: Can hide bugs and make debugging difficult
Recommendation: Replace with specific exception types or except Exception as e:
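The fix is mechanical; a self-contained example of the recommended pattern (illustrative code, not the project's actual lines):

```python
import json
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

def load_memory(path: Path) -> dict:
    # Before (anti-pattern): a bare `except:` hides every failure,
    # including KeyboardInterrupt and SystemExit.
    #
    # After: catch only the errors that can actually occur, and log them.
    try:
        return json.loads(path.read_text())
    except (OSError, json.JSONDecodeError) as exc:
        logger.warning("Could not load memory file %s: %s", path, exc)
        return {}
```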
Duplicate Server Implementation (Priority: MEDIUM)
Location: mcp_backend.py vs local_directory_memory_mcp_refactored.py
Issue: Two similar MCP server implementations causing confusion
Recommendation: Consolidate into single implementation or clearly document use cases
Security Concerns (Priority: HIGH)
Path Traversal: _sanitize() methods need hardening
CORS Policy: server.py:16-20 allows all origins (insecure for production)
No Rate Limiting: LLM queries can be abused
Recommendation: Implement stricter validation, CORS whitelist, and rate limiting
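For the CORS point specifically, FastAPI's middleware makes the whitelist a small change; a sketch (the origin list is an example):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Replace allow_origins=["*"] with an explicit whitelist of trusted origins
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://your-dashboard.example.com"],  # example origin
    allow_methods=["GET", "POST"],
    allow_headers=["Authorization", "Content-Type"],
)
```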
Missing Claude Implementation (Priority: MEDIUM)
Location: mcp_backend.py:109-111
Issue: Claude provider configured but not fully implemented
Recommendation: Complete Claude API integration or remove from options
Code Quality Improvements
Type Hints: Add comprehensive type annotations
Error Messages: More descriptive error messages with context
Logging: Add DEBUG level logging for troubleshooting
Documentation: Add docstrings to all public methods
Testing: No unit tests present - recommend adding test suite
Architecture Recommendations
Configuration Management: Use Pydantic for config validation
Graceful Shutdown: Implement proper cleanup on SIGTERM/SIGINT
Health Checks: Add /health endpoint to FastAPI server
Authentication: Add API key authentication for REST endpoints
Monitoring: Add metrics collection (request counts, latency, errors)
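Two of these are cheap to adopt immediately; a sketch combining a Pydantic-validated config with a /health endpoint (field names mirror the illustrative config above, not necessarily the real one):

```python
from fastapi import FastAPI
from pydantic import BaseModel

class LLMSettings(BaseModel):
    provider: str = "grok"
    model: str = "grok-beta"

class AppConfig(BaseModel):
    watch_dir: str
    memory_file: str = ".fgd_memory.json"
    llm: LLMSettings = LLMSettings()

app = FastAPI()

@app.get("/health")
def health() -> dict:
    """Lightweight liveness probe for monitoring."""
    return {"status": "ok"}

# Validation fails loudly at startup instead of failing later at runtime
config = AppConfig(watch_dir="./my_project")
```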
Security Best Practices
If deploying in production:
Environment Variables: Never commit the .env file
API Keys: Rotate keys regularly, use a secret management service
CORS: Whitelist specific origins instead of *
Input Validation: Validate all user inputs and file paths
Rate Limiting: Implement per-user/IP rate limits
TLS: Use HTTPS for all external API communications
Logging: Avoid logging sensitive data (API keys, tokens)
Troubleshooting
Server won't start
Check API key in .env file
Verify directory permissions for watch_dir
Check if port 8456 is available (for FastAPI)
File watcher not detecting changes
Ensure watch_dir is correctly configured
Check directory isn't too large (>2GB default limit)
Verify sufficient system resources
LLM queries failing
Verify API key is valid and has credits
Check network connectivity to API endpoint
Review logs for detailed error messages
Memory not persisting
Check write permissions on memory_file location
Verify disk space available
Look for errors in logs during save operations
Contributing
Code review identified several improvement opportunities:
Fix bare exception handlers
Add comprehensive test suite
Complete Claude provider implementation
Add type hints throughout
Improve error messages
Consolidate duplicate server implementations
License
[Add your license here]
Support
For issues, questions, or contributions, please [add contact information or repository link].