# Memory Forensics MCP Server

AI-powered memory analysis using Volatility 3 and MCP.

## Features

### Core Forensics
- **Process Analysis**: List processes, detect hidden processes, analyze process trees
- **Code Injection Detection**: Identify malicious code injection using malfind
- **Network Analysis**: Correlate network connections with processes
- **Command Line Analysis**: Extract process command lines
- **DLL Analysis**: Examine loaded DLLs per process
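These capabilities correspond to standard Volatility 3 plugins. As a rough sketch of what runs under the hood (an assumption — the server may call the volatility3 Python API directly rather than the `vol` CLI):

```python
# Illustrative only: mapping of the features above to Volatility 3 plugin names.
# Assumes the "vol" console script that ships with volatility3 is on PATH.
import subprocess

FEATURE_PLUGINS = {
    "process list": "windows.pslist",
    "hidden processes": "windows.psscan",
    "process tree": "windows.pstree",
    "code injection": "windows.malfind",
    "network connections": "windows.netscan",
    "command lines": "windows.cmdline",
    "loaded DLLs": "windows.dlllist",
}

def run_plugin(dump: str, plugin: str) -> str:
    """Run one Volatility 3 plugin against a memory dump and return its output."""
    result = subprocess.run(
        ["vol", "-f", dump, plugin],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```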
### Advanced Capabilities

- **Command Provenance**: Full audit trail of all Volatility commands executed
- **File Integrity**: MD5/SHA1/SHA256 hashing of memory dumps (see the sketch after this list)
- **Timeline Analysis**: Chronological event ordering for incident reconstruction
- **Anomaly Detection**: Automated detection of suspicious process behavior
- **Multi-Format Export**: JSON, CSV, and HTML report generation
- **Process Extraction**: Extract detailed process information for offline analysis
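For context on the File Integrity feature, here is a minimal sketch of the kind of streaming hashing involved (illustrative only, not the server's actual code; chunked reads keep multi-GB dumps out of memory):

```python
# Illustrative sketch of MD5/SHA1/SHA256 hashing of a memory dump.
# Streams the file in 1 MiB chunks so multi-GB dumps are never fully loaded.
import hashlib

def hash_dump(path: str, chunk_size: int = 1 << 20) -> dict:
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            for digest in digests.values():
                digest.update(chunk)
    return {name: digest.hexdigest() for name, digest in digests.items()}
```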
## Architecture
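In brief: an MCP-capable LLM client (Claude Code or a custom client) sends tool calls to this server, the server runs Volatility 3 against the selected memory dump, and results are cached in a local SQLite database (see Data Storage below) so repeat questions are answered instantly.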
## LLM Compatibility

This MCP server is LLM-agnostic: it works with any LLM, communicating via the Model Context Protocol (MCP).

### Supported LLMs
| LLM | Client | Best For |
|-----|--------|----------|
| Claude (Opus/Sonnet) | Claude Code | Higher-quality analysis |
| Llama (via Ollama) | Custom client (included) | Local/offline setups, confidential investigations |
| GPT-4 | Custom client | OpenAI ecosystem users |
| Mistral, Phi, others | Custom client | Custom configurations |
### Quick Setup by LLM

**Claude (easiest):**

- Official Claude Code client with native tool-calling support
- Uses the `~/.claude/mcp.json` configuration
- See the Quick Start section below for setup instructions

**Llama / Ollama:**

- Use the bundled reference client in `examples/ollama_client.py` (see Using with Local LLMs below)

**Custom LLM:**

- See `examples/ollama_client.py` for a reference implementation
- Adapt it to your LLM's API
- Full guide: MULTI_LLM_GUIDE.md
### LLM Profiles

Optimize tool descriptions for different LLM capabilities via the `MCP_LLM_PROFILE` environment variable (see Customization below).
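A sketch of selecting a profile when launching the bundled client; the profile value shown is an assumption, so check MULTI_LLM_GUIDE.md for the names the server actually recognizes:

```python
# Launch the bundled Ollama client with an LLM profile selected.
# "small" is a placeholder profile name — see MULTI_LLM_GUIDE.md for real values.
import os
import subprocess

env = dict(os.environ, MCP_LLM_PROFILE="small")
subprocess.run(["python", "examples/ollama_client.py"], env=env, check=True)
```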
See MULTI_LLM_GUIDE.md for comprehensive multi-LLM setup instructions.
## Quick Start

### Prerequisites

- Python 3.8+
- Volatility 3 installed and accessible
- Memory dumps (supported formats: `.zip`, `.raw`, `.mem`, `.dmp`, `.vmem`)
### Installation

1. Clone or download this repository:

   ```bash
   cd /path/to/your/projects
   git clone <repository-url>
   cd memory-forensics-mcp
   ```

2. Create a virtual environment:

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

   This installs all required dependencies, including Volatility 3 from PyPI.

4. Configure the memory dumps directory (edit `config.py`):

   ```python
   from pathlib import Path

   # Set your memory dumps directory
   DUMPS_DIR = Path("/path/to/your/memdumps")
   ```
#### Advanced: Using a Custom Volatility 3 Installation

If you need a custom Volatility 3 build (e.g., bleeding edge from git), point the server at your checkout via the `VOLATILITY_PATH` environment variable or `config.py` (both are mentioned under Troubleshooting below).
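As a sketch of what that override might look like (an assumption about `config.py`'s shape, not its actual contents):

```python
# config.py (sketch) — prefer a git checkout over the PyPI volatility3 package.
# Assumption: the project resolves VOLATILITY_PATH roughly like this.
import os
import sys

VOLATILITY_PATH = os.environ.get("VOLATILITY_PATH")  # e.g. a local volatility3 clone
if VOLATILITY_PATH:
    sys.path.insert(0, VOLATILITY_PATH)  # "import volatility3" now uses the checkout
```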
### Configure for Claude Code

Add the server to `~/.claude/mcp.json`.
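A minimal example entry (the `memory-forensics` server name and the `server.py` entry point are assumptions — match them to the repository's actual files):

```json
{
  "mcpServers": {
    "memory-forensics": {
      "command": "/absolute/path/to/memory-forensics-mcp/venv/bin/python",
      "args": ["/absolute/path/to/memory-forensics-mcp/server.py"]
    }
  }
}
```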
Replace `/absolute/path/to/memory-forensics-mcp` with your actual installation path.
### Basic Usage with Claude Code
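Once the server is registered, you drive the investigation conversationally. Illustrative prompts (the dump name is hypothetical):

- "List the available memory dumps"
- "Process sample.mem, then show me the process tree"
- "Run anomaly detection and export the findings as an HTML report"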
### Basic Usage with Ollama
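With a model pulled into a locally running Ollama instance, start the bundled reference client, which connects it to this server: `python examples/ollama_client.py`. See Using with Local LLMs below and MULTI_LLM_GUIDE.md for details.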
## Available Tools

### Core Analysis (8 tools)

- List available memory dumps
- Process a dump with Volatility 3 (`process_dump`)
- List all processes
- Deep-dive into a specific process
- Find injected code
- Analyze network connections
- Find rootkit-hidden processes
- Show parent-child process relationships
### Advanced Features (6 tools)

- Get file hashes, OS info, and statistics
- Export to JSON, CSV, or HTML formats
- View full command provenance / audit trail
- Create a chronological event timeline
- Find suspicious process behavior
- Extract detailed process info to a file
## Workflow

### Standard Investigation

1. **List dumps**: see what memory dumps are available
2. **Process dump**: extract artifacts using Volatility 3 (this takes time!)
3. **Get metadata**: view file hashes and dump statistics
4. **Detect anomalies**: automated suspicious-behavior detection
5. **Generate timeline**: understand the sequence of events
6. **Export results**: save findings in JSON/CSV/HTML format
### Example Investigation
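An illustrative session following the steps above (the dump name is hypothetical): ask for the available dumps, then "Process infected-host.raw" and wait for artifact extraction to finish; record the dump's hashes via the metadata tool, run anomaly detection, request a timeline around anything suspicious, and finish with "Export the findings as an HTML report".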
## Data Storage

- **Dumps**: configured via `DUMPS_DIR` in `config.py` (default: `<project-dir>/dumps/`)
- **Cache**: `<install-dir>/data/artifacts.db` (SQLite database)
- **Exports**: `<install-dir>/data/exports/` (JSON, CSV, HTML reports)
- **Extracted files**: `<install-dir>/data/extracted/` (extracted process data)
- **Temp extractions**: `/tmp/memdump_*` (auto-cleaned)
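Because the cache is plain SQLite, it can be inspected outside the server. A minimal sketch that only lists the tables (their names aren't documented here, so none are assumed):

```python
# List the tables in the artifacts cache; run from the install directory.
import sqlite3

with sqlite3.connect("data/artifacts.db") as conn:
    for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
        print(name)
```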
## Using with Local LLMs

The MCP server works with any LLM via the Model Context Protocol. For local analysis:

### Quick Start with Ollama
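A hedged outline (the model name is just an example; any locally pulled model should work):

1. Install Ollama and make sure it is running locally
2. Pull a model, e.g. `ollama pull llama3`
3. Start the bundled client: `python examples/ollama_client.py`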
### Customization

- **Example client**: see `examples/ollama_client.py` for a complete reference implementation
- **LLM profiles**: use the `MCP_LLM_PROFILE` environment variable to optimize for different model sizes
- **Full guide**: see MULTI_LLM_GUIDE.md for comprehensive setup instructions for Llama, GPT-4, and other LLMs

**Benefits of local LLMs:**

- Complete privacy: no data is sent to cloud services
- Free to use after initial setup (no API costs)
- Suitable for confidential investigations and offline environments
## Performance Notes

- Initial processing of a 2-3 GB dump takes 5-15 minutes
- Results are cached in SQLite for instant subsequent queries
- Consider processing dumps offline, then analyzing interactively
## Troubleshooting

### "Volatility import error"

- Ensure volatility3 is installed: `pip install -r requirements.txt`
- For custom installations, check the `VOLATILITY_PATH` environment variable or `config.py`
- Verify that the import works: `python -c "import volatility3; print('OK')"`
"No dumps found"
Check
DUMPS_DIRinconfig.pySupported formats: .zip, .raw, .mem, .dmp, .vmem
"Processing very slow"
Normal for large dumps
Consider running
process_dumponce, then all queries are fastUse smaller test dumps for development
## License
This is a research/educational tool. Ensure you have authorization before analyzing any memory dumps.