Memory Forensics MCP Server
AI-powered memory dump analysis using Volatility 3 for digital forensics investigations. Enables process analysis, malware detection, network forensics, timeline generation, and anomaly detection with support for Claude, Llama, and other LLMs.
Features
Core Forensics
- Process Analysis: List processes, detect hidden processes, analyze process trees
- Code Injection Detection: Identify malicious code injection using malfind
- Network Analysis: Correlate network connections with processes
- Command Line Analysis: Extract process command lines
- DLL Analysis: Examine loaded DLLs per process
Advanced Capabilities
- Command Provenance: Full audit trail of all Volatility commands executed
- File Integrity: MD5/SHA1/SHA256 hashing of memory dumps (see the hashing sketch after this list)
- Timeline Analysis: Chronological event ordering for incident reconstruction
- Anomaly Detection: Automated detection of suspicious process behavior
- Multi-Format Export: JSON, CSV, and HTML report generation
- Process Extraction: Extract detailed process information for offline analysis
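As an illustration of the File Integrity feature above, large dumps are best hashed in fixed-size chunks so a multi-gigabyte file never has to fit in memory. This is a minimal sketch of the idea, not the server's actual implementation; the function name and chunk size are assumptions.

```python
import hashlib
from pathlib import Path

def hash_dump(path: Path, chunk_size: int = 1024 * 1024) -> dict:
    """Compute MD5/SHA1/SHA256 of a memory dump without loading it whole."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):  # 1 MiB at a time
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}
```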
Architecture
Memory Dump -> Volatility 3 -> SQLite Cache -> MCP Server -> LLM Client
(Claude Code/Local LLM)
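To make the pipeline concrete, each capability is exposed to the LLM as an MCP tool that reads from the SQLite cache instead of re-running Volatility. The sketch below shows what one tool could look like using the MCP Python SDK's FastMCP helper; the cache path and table schema are assumptions for illustration, not this project's actual code.

```python
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("memory-forensics")

@mcp.tool()
def list_processes(dump_name: str) -> list:
    """Return the cached process list for a previously processed dump."""
    # Hypothetical cache schema: processes(dump, pid, ppid, name, create_time)
    conn = sqlite3.connect("data/artifacts.db")
    rows = conn.execute(
        "SELECT pid, ppid, name, create_time FROM processes WHERE dump = ?",
        (dump_name,),
    ).fetchall()
    conn.close()
    return [
        {"pid": pid, "ppid": ppid, "name": name, "create_time": ts}
        for pid, ppid, name, ts in rows
    ]

if __name__ == "__main__":
    mcp.run()  # serve over stdio so MCP clients can connect
```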
LLM Compatibility
This MCP server works with any LLM: it is LLM-agnostic and communicates via the Model Context Protocol (MCP).
Supported LLMs
| LLM | Client | Best For |
|---|---|---|
| Claude (Opus/Sonnet) | Claude Code | Higher quality analysis |
| Llama (via Ollama) | Custom client (included) | Local/offline LLM setup, confidential investigations |
| GPT-4 | Custom client | OpenAI ecosystem users |
| Mistral, Phi, others | Custom client | Custom configs |
Quick Setup by LLM
Claude (Easiest):
- Official Claude Code client with native tool calling support
- Uses ~/.claude/mcp.json configuration
- See the Quick Start section below for setup instructions
Llama / Ollama:
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a model
ollama pull llama3.1:70b
# Start Ollama
ollama serve
# Run the included client
cd examples
pip install -r requirements.txt
python ollama_client.py
Custom LLM:
- See examples/ollama_client.py for a reference implementation
- Adapt it to your LLM's API
- Full guide: MULTI_LLM_GUIDE.md
LLM Profiles
Optimize tool descriptions for different LLM capabilities:
# For Llama 3.1 70B+
export MCP_LLM_PROFILE=llama70b
# For smaller models (8B-13B)
export MCP_LLM_PROFILE=llama13b
# For minimal models
export MCP_LLM_PROFILE=minimal
See MULTI_LLM_GUIDE.md for comprehensive multi-LLM setup instructions.
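One way to picture what a profile does (a hedged sketch, not the project's actual code): the server reads MCP_LLM_PROFILE at startup and selects shorter tool descriptions for smaller models, since long descriptions eat into a small model's context and hurt tool selection.

```python
import os

# Illustrative per-profile descriptions for a single tool; real profiles would
# cover every tool. Verbose text suits 70B-class models, terse text suits 8B-13B.
TOOL_DESCRIPTIONS = {
    "llama70b": {
        "detect_anomalies": (
            "Scan cached process data for suspicious parent/child relationships, "
            "typosquatted process names, and unusual executable paths."
        ),
    },
    "llama13b": {"detect_anomalies": "Find suspicious processes in a memory dump."},
    "minimal": {"detect_anomalies": "Detect anomalies."},
}

profile = os.environ.get("MCP_LLM_PROFILE", "llama70b")
descriptions = TOOL_DESCRIPTIONS.get(profile, TOOL_DESCRIPTIONS["llama70b"])
```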
Quick Start
Prerequisites
- Python 3.8+
- Volatility 3 installed and accessible
- Memory dumps (supported formats: .zip, .raw, .mem, .dmp, .vmem)
Installation
1. Clone or download this repository:
cd /path/to/your/projects
git clone <repository-url>
cd memory-forensics-mcp
2. Create a virtual environment:
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
3. Install dependencies:
pip install -r requirements.txt
This installs all required dependencies, including Volatility 3 from PyPI.
4. Configure the memory dumps directory (edit config.py):
# Set your memory dumps directory
DUMPS_DIR = Path("/path/to/your/memdumps")
Advanced: Using Custom Volatility 3 Installation
If you need to use a custom Volatility 3 build (e.g., bleeding edge from git):
# Set environment variable
export VOLATILITY_PATH=/path/to/custom/volatility3
# Or edit config.py directly
# The system will automatically detect and use your custom installation
Configure for Claude Code
Add to ~/.claude/mcp.json:
{
"mcpServers": {
"memory-forensics": {
"command": "/absolute/path/to/memory-forensics-mcp/venv/bin/python",
"args": ["/absolute/path/to/memory-forensics-mcp/server.py"]
}
}
}
Replace /absolute/path/to/memory-forensics-mcp with your actual installation path.
Basic Usage with Claude Code
# Start Claude Code
claude
# Example commands:
"List available memory dumps"
"Process the Win11Dump memory dump"
"Get metadata and hashes for Win11Dump"
"Detect anomalies in Win11Dump"
"Generate a timeline for Win11Dump"
"Export data to JSON format"
Basic Usage with Ollama
# In one terminal: Start Ollama
ollama serve
# In another terminal: Run the MCP client
cd examples
export MCP_LLM_PROFILE=llama70b
python ollama_client.py
Available Tools
Core Analysis (8 tools)
| Tool | Description |
|---|---|
| list_dumps | List available memory dumps |
| process_dump | Process a dump with Volatility 3 |
| list_processes | List all processes |
| analyze_process | Deep dive into a specific process |
| detect_code_injection | Find injected code |
| network_analysis | Analyze network connections |
| detect_hidden_processes | Find rootkit-hidden processes |
| get_process_tree | Show parent-child relationships |
Advanced Features (6 tools)
| Tool | Description |
|---|---|
| get_dump_metadata | Get file hashes, OS info, and statistics |
| export_data | Export to JSON, CSV, or HTML formats |
| get_command_history | View full command provenance/audit trail |
| generate_timeline | Create a chronological event timeline |
| detect_anomalies | Find suspicious process behavior |
| extract_process | Extract detailed process info to a file |
Workflow
Standard Investigation
- List dumps: See what memory dumps are available
- Process dump: Extract artifacts using Volatility 3 (this takes time!)
- Get metadata: View file hashes and dump statistics
- Detect anomalies: Automated suspicious behavior detection
- Generate timeline: Understand the sequence of events
- Export results: Save findings in JSON/CSV/HTML format
Example Investigation
User: "Analyze the Win11Dump memory dump for signs of compromise"
AI: "Let me process that dump first..."
> Calls process_dump("Win11Dump")
> Processes 103 processes, 101 network connections
AI: "Let me check for anomalies..."
> Calls detect_anomalies("Win11Dump")
AI: "Found critical anomalies:
- cmd.exe spawned by winword.exe (possible macro exploit)
- Process 'svch0st.exe' similar to 'svchost.exe' (typosquatting)
Let me check the timeline..."
> Calls generate_timeline("Win11Dump")
AI: "Timeline shows suspicious sequence:
1. winword.exe opens document
2. cmd.exe spawns 30 seconds later
3. powershell.exe downloads from external IP
Let me analyze that PowerShell process..."
> Calls analyze_process("Win11Dump", 2048)
AI: "This PowerShell process has:
- Unbacked executable memory (code injection detected)
- Network connection to suspicious IP 192.0.2.100
- Base64 encoded command line
This indicates a fileless malware infection via macro exploit.
Exporting full report..."
> Calls export_data("Win11Dump", format="html")
AI: "Complete analysis exported to Win11Dump_report.html"
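The anomalies flagged in this transcript (an Office process spawning a shell, and a name one character off from svchost.exe) can be approximated with simple heuristics over the cached process list. The sketch below is illustrative only; the watch lists, similarity threshold, and function name are assumptions, not the server's detection logic.

```python
from difflib import SequenceMatcher

# Illustrative watch lists.
SUSPICIOUS_PARENTS = {("winword.exe", "cmd.exe"), ("winword.exe", "powershell.exe")}
WELL_KNOWN = ["svchost.exe", "lsass.exe", "csrss.exe", "explorer.exe"]

def find_anomalies(processes):
    """processes: list of dicts with 'pid', 'ppid', 'name'. Returns finding strings."""
    by_pid = {p["pid"]: p for p in processes}
    findings = []
    for p in processes:
        name = p["name"].lower()
        parent = by_pid.get(p["ppid"])
        if parent and (parent["name"].lower(), name) in SUSPICIOUS_PARENTS:
            findings.append(f"{p['name']} spawned by {parent['name']} (possible macro exploit)")
        for known in WELL_KNOWN:
            if name != known and SequenceMatcher(None, name, known).ratio() > 0.85:
                findings.append(f"'{p['name']}' resembles '{known}' (possible typosquatting)")
    return findings
```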
Data Storage
- Dumps: Configured via DUMPS_DIR in config.py (default: <project-dir>/dumps/)
- Cache: <install-dir>/data/artifacts.db (SQLite database)
- Exports: <install-dir>/data/exports/ (JSON, CSV, HTML reports)
- Extracted files: <install-dir>/data/extracted/ (extracted process data)
- Temp extractions: /tmp/memdump_* (auto-cleaned)
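The artifacts.db cache is what makes follow-up questions fast: once process_dump has run, later tools only query SQLite. A hedged example of such a query, with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect("data/artifacts.db")
# Hypothetical schema: connections(dump, pid, local_addr, remote_addr, state)
rows = conn.execute(
    "SELECT p.name, c.local_addr, c.remote_addr, c.state "
    "FROM connections c JOIN processes p ON p.pid = c.pid AND p.dump = c.dump "
    "WHERE c.dump = ?",
    ("Win11Dump",),
).fetchall()
for name, local, remote, state in rows:
    print(f"{name}: {local} -> {remote} [{state}]")
conn.close()
```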
Using with Local LLMs
The MCP server works with any LLM via the Model Context Protocol. For local analysis:
Quick Start with Ollama
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull Llama model
ollama pull llama3.1:70b
# Start Ollama server
ollama serve
# In another terminal, run the included client
cd /path/to/memory-forensics-mcp/examples
pip install -r requirements.txt
python ollama_client.py
Customization
- Example client: See examples/ollama_client.py for a complete reference implementation
- LLM profiles: Use the MCP_LLM_PROFILE environment variable to optimize for different model sizes
- Full guide: See MULTI_LLM_GUIDE.md for comprehensive setup instructions for Llama, GPT-4, and other LLMs
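To adapt the client to another LLM, the essential piece is a loop that advertises the MCP tools to the model and executes whatever tool calls come back. Below is a simplified sketch of that loop against Ollama's /api/chat endpoint; it omits the MCP plumbing (see examples/ollama_client.py for the complete version), and the tool schema and call_mcp_tool bridge are placeholders.

```python
import requests

OLLAMA = "http://localhost:11434/api/chat"

# Placeholder: the real client builds these schemas from the MCP server's tool list.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "detect_anomalies",
        "description": "Find suspicious process behavior in a memory dump",
        "parameters": {
            "type": "object",
            "properties": {"dump_name": {"type": "string"}},
            "required": ["dump_name"],
        },
    },
}]

def call_mcp_tool(name, arguments):
    # Placeholder for the MCP bridge; examples/ollama_client.py does the real call.
    raise NotImplementedError

messages = [{"role": "user", "content": "Detect anomalies in Win11Dump"}]
while True:
    reply = requests.post(OLLAMA, json={
        "model": "llama3.1:70b", "messages": messages,
        "tools": TOOLS, "stream": False,
    }).json()["message"]
    messages.append(reply)
    if not reply.get("tool_calls"):
        print(reply["content"])  # final answer
        break
    for call in reply["tool_calls"]:
        fn = call["function"]
        result = call_mcp_tool(fn["name"], fn["arguments"])
        messages.append({"role": "tool", "content": str(result)})
```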
Benefits of local LLMs:
- Complete privacy - no data sent to cloud services
- Free to use after initial setup (no API costs)
- Suitable for confidential investigations and offline environments
Performance Notes
- Initial processing of a dump (2-3 GB) takes 5-15 minutes
- Results are cached in SQLite for instant subsequent queries
- Consider processing dumps offline, then analyzing interactively
Troubleshooting
"Volatility import error"
- Ensure volatility3 is installed: pip install -r requirements.txt
- For custom installations, check the VOLATILITY_PATH environment variable or config.py
- Verify the import works: python -c "import volatility3; print('OK')"
"No dumps found"
- Check DUMPS_DIR in config.py
- Supported formats: .zip, .raw, .mem, .dmp, .vmem
"Processing very slow"
- Normal for large dumps
- Run process_dump once; all subsequent queries are served from the cache
- Use smaller test dumps for development
License
This is a research/educational tool. Ensure you have authorization before analyzing any memory dumps.