Stem MCP Server 🎵
A comprehensive Model Context Protocol (MCP) server for professional AI-powered audio processing and stem manipulation. Designed specifically for music producers, audio engineers, and Logic Pro users who need advanced audio processing capabilities integrated with AI workflows.
Features 🚀
🎯 Core Audio Processing
- 🎤 AI Stem Generation: State-of-the-art source separation using Demucs models
- ✂️ Smart Audio Splitting: Intelligent segmentation with customizable overlap and fade options
- 🔄 Seamless Loop Creation: Professional loop generation with tempo matching and crossfading
- 📊 Advanced Audio Analysis: Deep musical feature extraction (tempo, key, spectral characteristics)
- 🎯 Precise Instrument Isolation: Extract specific instruments with multiple algorithms
- 🎵 Vocal Processing: Advanced vocal extraction and separation techniques
🎛️ Advanced Features
- 🎪 Multi-Vocal Range Separation: Split vocals into soprano, alto, tenor, bass ranges
- 🎼 Musical Structure Analysis: Detect beats, tempo, key signatures, and harmonic content
- 🔊 Dynamic Range Analysis: RMS energy, peak detection, loudness analysis
- 🎚️ Spectral Processing: Frequency domain analysis and manipulation
- ⚡ Batch Processing: Handle multiple files efficiently
- 🎨 Custom Processing Chains: Combine multiple tools for complex workflows
Supported Audio Formats
- WAV, MP3, FLAC, AAC, M4A, OGG, WMA
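Before handing a file to the tools, it can help to confirm the format is one of the above; a tiny illustrative sketch (not part of the server itself):

```python
from pathlib import Path

# Extensions accepted per the list above
SUPPORTED = {".wav", ".mp3", ".flac", ".aac", ".m4a", ".ogg", ".wma"}

def is_supported(path: str) -> bool:
    """Return True if the file extension is in the supported set."""
    return Path(path).suffix.lower() in SUPPORTED

print(is_supported("song.m4a"))   # True
print(is_supported("song.opus"))  # False: convert with FFmpeg first (see Troubleshooting)
```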
AI Models
- Demucs: State-of-the-art source separation models
- htdemucs (default): High-quality 4-stem separation
- htdemucs_ft: Fine-tuned variant
- htdemucs_6s: 6-stem separation
- mdx: Alternative model architecture
- mdx_extra: Enhanced MDX model
Installation 🔧
Prerequisites
- Python 3.10 or higher (required for MCP compatibility)
- FFmpeg (for audio processing)
- CUDA-compatible GPU (optional, for faster processing)
Install FFmpeg
# macOS (using Homebrew)
brew install ffmpeg
# Ubuntu/Debian
sudo apt update && sudo apt install ffmpeg
# Windows (using Chocolatey)
choco install ffmpeg
Install the MCP Server
# Clone or create the project
cd stem-mcp
# Install in development mode
pip install -e .
# Or install from requirements
pip install -r requirements.txt
Install Dependencies
# Install core dependencies
pip install "mcp>=1.0.0" librosa soundfile numpy scipy torch torchaudio demucs pydub
# For best performance, install with CUDA support
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118
Configuration ⚙️
MCP Client Configuration
Add this to your MCP client configuration (e.g., Claude Desktop):
{
  "mcpServers": {
    "stem-processing": {
      "command": "stem-mcp",
      "args": [],
      "env": {
        "PYTHONPATH": "/path/to/stem-mcp/src"
      }
    }
  }
}
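If the server does not appear in your client, a quick sanity check is to confirm the stem-mcp entry point is on PATH and that the client config parses. The config path below is an assumption for Claude Desktop on macOS; adjust it for your client and OS:

```python
import json
import shutil
from pathlib import Path

# Assumed Claude Desktop config location on macOS (adjust for your setup)
CONFIG = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"

print("stem-mcp on PATH:", shutil.which("stem-mcp"))  # None means the install/venv is not active

if CONFIG.exists():
    servers = json.loads(CONFIG.read_text()).get("mcpServers", {})
    print("Configured servers:", list(servers))
else:
    print("No config found at", CONFIG)
```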
Usage Examples 🎯
1. Generate Stems from Audio
# Using the MCP tool
{
"tool": "generate_stems",
"arguments": {
"audio_path": "/path/to/song.wav",
"output_dir": "./stems",
"model_type": "htdemucs",
"num_stems": 4
}
}
Output: Separates audio into vocals, drums, bass, and other instruments.
2. Split Stems into Segments
{
"tool": "split_stems",
"arguments": {
"stem_path": "./stems/vocals.wav",
"output_dir": "./segments",
"segment_length": 15.0,
"overlap": 2.0
}
}
Output: Creates 15-second segments with 2-second overlap.
3. Create Seamless Loops
{
"tool": "create_loop",
"arguments": {
"audio_path": "./drums.wav",
"loop_duration": 8.0,
"bpm": 120,
"crossfade_duration": 0.2
}
}
Output: Creates an 8-second loop at 120 BPM with smooth crossfading.
4. Analyze Audio Features
{
"tool": "analyze_audio",
"arguments": {
"audio_path": "./song.wav"
}
}
Output:
🎵 Audio Analysis Results:
📊 Basic Info:
Duration: 245.67 seconds
Sample Rate: 44100 Hz
Channels: Stereo
🎵 Musical Features:
Tempo: 128.5 BPM
Estimated Key: G
Beat Count: 523
🔊 Spectral Analysis:
Avg Spectral Centroid: 2847.3 Hz
Avg Spectral Rolloff: 8934.2 Hz
Avg Zero Crossing Rate: 0.0847
Avg RMS Energy: 0.1234
5. Extract Vocals Only
{
"tool": "extract_vocal",
"arguments": {
"audio_path": "./song.wav",
"method": "demucs"
}
}
6. Isolate Specific Instruments
{
"tool": "isolate_instrument",
"arguments": {
"audio_path": "./song.wav",
"instrument": "drums",
"method": "demucs"
}
}
API Reference 📚
Complete Tool Suite
🎤 generate_stems
State-of-the-art AI-powered source separation using Demucs models.
Parameters:
- audio_path (string, required): Path to input audio file
- output_dir (string, optional): Output directory (default: ".")
- model_type (string, optional): Demucs model type
  - "htdemucs" (default): High-quality 4-stem separation
  - "htdemucs_ft": Fine-tuned variant for enhanced quality
  - "htdemucs_6s": 6-stem separation (vocals, drums, bass, piano, guitar, other)
  - "mdx": Fast processing with good quality
  - "mdx_extra": Enhanced MDX model
- num_stems (integer, optional): Number of output stems (2-6, default: 4)
Output: Generates separate audio files for each stem (vocals, drums, bass, other)
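For orientation, the same kind of separation can be run with the standalone Demucs command line (installed with the dependencies above); a rough sketch via subprocess, with placeholder paths, and not necessarily how the server invokes Demucs internally:

```python
import subprocess

# Roughly equivalent standalone call: 4-stem htdemucs separation into ./stems
subprocess.run(
    ["demucs", "-n", "htdemucs", "-o", "./stems", "/path/to/song.wav"],
    check=True,
)
```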
✂️ split_stems
Intelligent audio segmentation with customizable parameters.
Parameters:
- stem_path (string, required): Path to audio file to split
- output_dir (string, optional): Output directory (default: ".")
- segment_length (number, optional): Segment duration in seconds (1-300, default: 30)
- overlap (number, optional): Overlap between segments in seconds (0-10, default: 0)
Features:
- Smart segment boundary detection
- Customizable overlap for smooth transitions
- Preserves audio quality and metadata
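Conceptually, overlapped segmentation advances the window by segment_length minus overlap each step. A standalone sketch with librosa and soundfile (illustrative only, not the server's implementation; it assumes overlap is smaller than the segment length and drops any trailing remainder shorter than one segment):

```python
import librosa
import soundfile as sf

def split_with_overlap(path, segment_length=30.0, overlap=2.0, out_prefix="segment"):
    """Write fixed-length segments, each starting (segment_length - overlap) s after the previous."""
    y, sr = librosa.load(path, sr=None, mono=False)
    if y.ndim == 1:
        y = y[None, :]                                       # shape (channels, samples)
    seg = int(segment_length * sr)
    hop = int((segment_length - overlap) * sr)
    for i, start in enumerate(range(0, y.shape[1] - seg + 1, hop)):
        chunk = y[:, start:start + seg]
        sf.write(f"{out_prefix}_{i:03d}.wav", chunk.T, sr)   # soundfile expects (samples, channels)

split_with_overlap("./stems/vocals.wav", segment_length=15.0, overlap=2.0)
```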
🔄 create_loop
Professional seamless loop creation with advanced crossfading.
Parameters:
- audio_path (string, required): Path to input audio
- output_path (string, optional): Output file path (auto-generated if not provided)
- loop_duration (number, optional): Loop duration in seconds (0.5-60, default: 4)
- bpm (number, optional): Target BPM (60-200, auto-detected if not specified)
- crossfade_duration (number, optional): Crossfade length in seconds (0-2, default: 0.1)
Features:
- Automatic tempo detection and matching
- Smart beat-aligned loop points
- Professional crossfading algorithms
- Maintains musical timing and feel
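The crossfade is what makes a loop seamless: the clip's tail fades out while its head fades in over the same window, so the end of one repetition lines up with the start of the next. A minimal mono sketch with numpy and soundfile (linear fades, placeholder paths, not the server's algorithm):

```python
import numpy as np
import soundfile as sf

def make_loop(path, out_path, loop_duration=4.0, crossfade=0.1):
    """Cut loop_duration seconds and blend the following `crossfade` seconds back into the head."""
    y, sr = sf.read(path, dtype="float32")
    if y.ndim > 1:
        y = y.mean(axis=1)                     # fold to mono for simplicity
    n, xf = int(loop_duration * sr), int(crossfade * sr)
    clip = y[: n + xf]                         # loop body plus crossfade tail (file must be long enough)
    fade = np.linspace(0.0, 1.0, xf)
    head = clip[:xf] * fade + clip[n:n + xf] * (1.0 - fade)   # tail fades out under the incoming head
    loop = np.concatenate([head, clip[xf:n]])
    sf.write(out_path, loop, sr)

make_loop("./drums.wav", "./drums_loop.wav", loop_duration=8.0, crossfade=0.2)
```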
📊 analyze_audio
Comprehensive musical and spectral analysis.
Parameters:
- audio_path (string, required): Path to audio file to analyze
Analysis Output:
- Basic Properties: Duration, sample rate, channel configuration
- Musical Features: Tempo (BPM), key signature, beat tracking
- Spectral Analysis: Frequency content, spectral centroid, rolloff
- Dynamic Range: RMS energy levels, peak detection
- Audio Quality: Zero-crossing rate, harmonic content
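Most of the features listed above can be reproduced with librosa alone; a minimal sketch of that kind of analysis (the key estimate here is just the strongest average chroma bin, which is cruder than a full key detector):

```python
import librosa
import numpy as np

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def quick_analysis(path):
    y, sr = librosa.load(path, sr=None, mono=True)
    tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
    return {
        "duration": len(y) / sr,
        "sample_rate": sr,
        "tempo": float(np.atleast_1d(tempo)[0]),
        "beat_count": len(beats),
        "estimated_key": NOTES[int(np.argmax(chroma.mean(axis=1)))],
        "avg_spectral_centroid": float(librosa.feature.spectral_centroid(y=y, sr=sr).mean()),
        "avg_rms": float(librosa.feature.rms(y=y).mean()),
    }

print(quick_analysis("./song.wav"))
```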
🎤 extract_vocal
Advanced vocal extraction with multiple algorithms.
Parameters:
- audio_path (string, required): Path to input audio
- output_path (string, optional): Output file path (auto-generated if not provided)
- method (string, optional): Extraction algorithm
  - "demucs" (default): AI-powered high-quality separation
  - "librosa": Traditional signal processing approach
  - "spectral": Frequency domain processing
Features:
- Multiple extraction algorithms for different use cases
- High-quality vocal isolation
- Preserves vocal character and dynamics
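As a very rough illustration of the non-AI end of the spectrum, the classic mid-channel trick relies on lead vocals usually being panned to the center of a stereo mix. It keeps bass and kick as well, so it is only an approximation and is not the server's "librosa" or "spectral" method:

```python
import soundfile as sf

def center_channel(path, out_path):
    """Crude vocal emphasis: keep the mid signal (L+R)/2, where centered vocals sit."""
    y, sr = sf.read(path, dtype="float32")
    if y.ndim < 2 or y.shape[1] < 2:
        raise ValueError("needs a stereo file")
    mid = 0.5 * (y[:, 0] + y[:, 1])   # center content (vocals, plus bass/kick)
    sf.write(out_path, mid, sr)

center_channel("./song.wav", "./song_center.wav")
```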
🎹 isolate_instrument
Precise instrument isolation using multiple techniques.
Parameters:
- audio_path (string, required): Path to input audio
- instrument (string, optional): Target instrument
  - "vocals": Lead and backing vocals
  - "drums": Full drum kit
  - "bass": Bass guitar and synthesizers
  - "guitar": Electric and acoustic guitars
  - "piano": Piano and keyboard instruments
  - "other": Remaining instruments
- output_path (string, optional): Output file path
- method (string, optional): Isolation technique
  - "demucs": AI source separation
  - "librosa": Signal processing
  - "spectral": Frequency domain filtering
🎪 separate_vocal_ranges
NEW: Advanced vocal range separation for choir and multi-vocal arrangements.
Parameters:
- audio_path (string, required): Path to vocal audio file
- output_dir (string, optional): Output directory for range files
Output: Separate files for each vocal range:
- Soprano: High female voices (C4-C6)
- Alto: Low female voices (G3-E5)
- Tenor: High male voices (C3-A4)
- Bass: Low male voices (E2-E4)
Features:
- Frequency-based intelligent separation
- Preserves natural vocal characteristics
- Ideal for choir arrangements and vocal analysis
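A frequency-band split along these lines can be sketched with Butterworth band-pass filters from scipy; real voices overlap heavily, so treat the cutoff frequencies below (derived from the note ranges above) as illustrative rather than the server's actual boundaries:

```python
import soundfile as sf
from pathlib import Path
from scipy.signal import butter, sosfiltfilt

# Approximate fundamental ranges in Hz (E2≈82, C4≈262, C6≈1047, ...)
RANGES = {"bass": (82, 330), "tenor": (131, 440), "alto": (196, 659), "soprano": (262, 1047)}

def split_ranges(path, out_dir="."):
    y, sr = sf.read(path, dtype="float32")
    if y.ndim > 1:
        y = y.mean(axis=1)                    # mono for simplicity
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for name, (lo, hi) in RANGES.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        sf.write(f"{out_dir}/{name}.wav", sosfiltfilt(sos, y), sr)

split_ranges("./stems/vocals.wav", out_dir="./ranges")
```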
🎵 extract_vocal_harmonies
NEW: Isolate and separate vocal harmonies from lead vocals.
Parameters:
- audio_path (string, required): Path to audio with vocal harmonies
- output_dir (string, optional): Directory for harmony files
- sensitivity (number, optional): Harmony detection sensitivity (0.1-1.0, default: 0.5)
Features:
- Separates lead vocals from harmonies
- Maintains harmonic relationships
- Perfect for remixing and vocal arrangement analysis
Performance Tips 🚀
Hardware Optimization
- GPU: Use CUDA-compatible GPU for 10x faster processing
- RAM: 16GB+ recommended for processing large files
- Storage: SSD recommended for faster I/O operations
Processing Tips
- File Format: Use WAV or FLAC for best quality
- Sample Rate: 44.1kHz or 48kHz for optimal results
- Batch Processing: Process multiple files in sequence for efficiency
Model Selection
- htdemucs: Best general-purpose model
- htdemucs_6s: Use for 6-stem separation (vocals, drums, bass, piano, guitar, other)
- mdx: Faster processing, slightly lower quality
Development 🛠️
💻 Complete Project Structure
stem-mcp/
├── src/stem_mcp/
│ ├── __init__.py # Package initialization and version
│ ├── server.py # Main MCP server implementation
│ ├── audio_processor.py # Core audio processing engine
│ ├── tools_schema.py # MCP tool definitions and schemas
│ ├── utils.py # Utility functions and helpers
│ ├── vocal_processor.py # Advanced vocal processing tools
│ └── analysis_engine.py # Audio analysis and feature extraction
├── examples/
│ ├── test_tools.py # Comprehensive tool testing script
│ ├── sample_workflows.py # Example production workflows
│ └── integration_examples.py # Logic Pro integration examples
├── tests/
│ ├── test_audio_processing.py # Audio processing tests
│ ├── test_vocal_tools.py # Vocal processing tests
│ ├── test_analysis.py # Analysis engine tests
│ └── test_integration.py # MCP integration tests
├── docs/
│ ├── API_REFERENCE.md # Detailed API documentation
│ ├── WORKFLOWS.md # Production workflow guides
│ ├── TROUBLESHOOTING.md # Common issues and solutions
│ └── PERFORMANCE_GUIDE.md # Optimization tips and benchmarks
├── pyproject.toml # Project configuration and dependencies
├── requirements.txt # Python dependencies
├── requirements-dev.txt # Development dependencies
├── .gitignore # Git ignore patterns
├── DEMO_COMPLETE.md # Complete demo and feature overview
├── README.md # This comprehensive guide
└── LICENSE # MIT License
🔧 Development Environment Setup
Quick Start
# Clone the repository
git clone <repository-url>
cd stem-mcp
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install in development mode with all dependencies
pip install -e ".[dev]"
# Install additional development tools
pip install -r requirements-dev.txt
# Verify installation
stem-mcp --version
Development Dependencies
# Core development tools
pip install pytest pytest-cov black flake8 mypy pre-commit
# Audio testing tools
pip install pytest-audio librosa-test-utils
# Performance profiling
pip install memory-profiler line-profiler
# Documentation tools
pip install sphinx sphinx-rtd-theme
🚀 Running in Development Mode
Basic Development Commands
# Run server with debug logging
stem-mcp --debug --log-level DEBUG
# Run with specific configuration
stem-mcp --config config/dev_config.json
# Run with performance profiling
stem-mcp --profile --profile-output profile_results.txt
# Test all tools with sample audio
python examples/test_tools.py
# Run comprehensive test suite
pytest tests/ -v --cov=src/stem_mcp
# Run specific test categories
pytest tests/test_audio_processing.py -v
pytest tests/test_vocal_tools.py -v
Code Quality & Formatting
# Format code with Black
black src/ tests/ examples/
# Lint with flake8
flake8 src/ tests/ examples/
# Type checking with mypy
mypy src/stem_mcp/
# Run all quality checks
pre-commit run --all-files
🧪 Testing & Quality Assurance
Test Categories
- Unit Tests: Individual function and class testing
- Integration Tests: MCP client-server communication
- Audio Tests: Audio processing accuracy and quality
- Performance Tests: Speed and memory usage benchmarks
- Regression Tests: Ensure consistent outputs across versions
Running Tests
# Run all tests with coverage
pytest tests/ --cov=src/stem_mcp --cov-report=html
# Run tests with audio samples
pytest tests/ --with-audio-samples
# Run performance benchmarks
pytest tests/test_performance.py --benchmark-only
# Run memory usage tests
pytest tests/test_memory.py --memray
🔍 Debugging & Profiling
Debug Mode Features
- Detailed logging at all processing stages
- Audio processing step visualization
- Memory usage tracking
- Processing time measurements
- Model loading and caching information
Performance Profiling
# Profile CPU usage
python -m cProfile -o profile.stats examples/test_tools.py
# Profile memory usage
python -m memory_profiler examples/test_tools.py
# Profile specific functions (the @profile decorator comes from memory_profiler or line_profiler)
from memory_profiler import profile

@profile
def my_function():
    # Function code here
    pass
🤝 Contributing Guidelines
Development Workflow
- Fork the repository and create your feature branch
- Set up development environment with all dependencies
- Write comprehensive tests for your changes
- Follow code style guidelines (Black, flake8, mypy)
- Update documentation for new features
- Run full test suite before submitting
- Submit pull request with detailed description
Code Style Standards
- Python: Follow PEP 8 with Black formatting
- Docstrings: Google-style docstrings for all public functions
- Type Hints: Use type hints for all function parameters and returns
- Comments: Clear, concise comments for complex logic
- Error Handling: Comprehensive error handling with informative messages
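For reference, a helper written to these standards might look like the following (a hypothetical function, shown only to illustrate the docstring and type-hint style):

```python
def db_to_gain(level_db: float) -> float:
    """Convert a decibel level to a linear gain factor.

    Args:
        level_db: Level in decibels (e.g. -6.0 for roughly half amplitude).

    Returns:
        The linear gain corresponding to level_db.

    Raises:
        TypeError: If level_db is not a number.
    """
    if not isinstance(level_db, (int, float)):
        raise TypeError("level_db must be numeric")
    return 10.0 ** (level_db / 20.0)
```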
Pull Request Checklist
- ☑️ All tests pass locally
- ☑️ Code follows style guidelines
- ☑️ Documentation is updated
- ☑️ New features have tests
- ☑️ No breaking changes (or clearly documented)
- ☑️ Performance impact assessed
- ☑️ Example usage provided
Professional Workflows 🎯
🎚️ Logic Pro Integration
Seamlessly integrate with Logic Pro for enhanced music production:
Complete Production Workflow
1. 🎵 Export from Logic Pro
   - Export stereo mix or individual tracks
   - Use 24-bit/48kHz for best quality
   - Export as WAV or AIFF format
2. 🤖 AI-Powered Processing
   - Generate high-quality stems using Demucs
   - Analyze musical content and structure
   - Extract specific instruments or vocal parts
   - Create seamless loops from any section
3. 🎹 Import Back to Logic
   - Import processed stems as individual tracks
   - Use analyzed BPM data for tempo matching
   - Apply extracted loops to new compositions
   - Layer isolated instruments for creative arrangements
Advanced Production Techniques
🎭 Stem-Based Remixing
# 1. Generate stems from your Logic Pro export
generate_stems("/path/to/logic_export.wav", model_type="htdemucs_6s")
# 2. Analyze each stem for musical content
analyze_audio("/stems/vocals.wav")
analyze_audio("/stems/drums.wav")
# 3. Create custom loops from specific sections
create_loop("/stems/drums.wav", loop_duration=8, bpm=128)
# 4. Extract vocal harmonies for detailed editing
extract_vocal_harmonies("/stems/vocals.wav")
🎵 Vocal Production Chain
# Complete vocal processing workflow
extract_vocal("/audio/full_mix.wav", method="demucs")
separate_vocal_ranges("/vocals/extracted_vocal.wav")
extract_vocal_harmonies("/vocals/extracted_vocal.wav")
🎶 Loop Library Creation
# Create a comprehensive loop library
split_stems("/stems/drums.wav", segment_length=8, overlap=1)
create_loop("/segments/drums_segment_001.wav", loop_duration=4)
create_loop("/segments/bass_segment_002.wav", loop_duration=8)
🎼 Music Production Use Cases
🎵 For Producers
- Stem Analysis: Understand song structure and arrangement
- Remixing: Extract and manipulate individual elements
- Sample Creation: Generate unique samples from existing tracks
- Loop Building: Create custom loops for new productions
🎤 For Vocalists & Vocal Coaches
- Vocal Isolation: Extract clean vocal tracks from mixes
- Harmony Analysis: Study vocal arrangements and harmonies
- Range Training: Separate and analyze different vocal ranges
- Performance Analysis: Study vocal techniques and patterns
🎸 For Musicians
- Instrument Learning: Isolate specific instruments for practice
- Transcription: Extract clear instrument tracks for notation
- Performance Study: Analyze playing techniques and arrangements
- Cover Creation: Create backing tracks by removing specific instruments
🎧 For Audio Engineers
- Mix Analysis: Understand frequency content and arrangement
- Mastering Reference: Compare individual stems and their processing
- Problem Solving: Isolate problematic elements in complex mixes
- Quality Control: Analyze audio content and detect issues
🔀 Complete Integration Example
Scenario: Converting a Logic Pro song into stems for remixing
# Step 1: Export your Logic Pro project as a stereo mix
# File -> Export -> Audio... -> 24-bit WAV
# Step 2: Generate high-quality stems
generate_stems("/path/to/my_song.wav",
model_type="htdemucs_6s", # 6-stem separation
output_dir="./my_song_stems")
# Step 3: Analyze each stem for musical information
analyze_audio("./my_song_stems/vocals.wav")
analyze_audio("./my_song_stems/drums.wav")
analyze_audio("./my_song_stems/bass.wav")
# Step 4: Create loops from interesting drum sections
split_stems("./my_song_stems/drums.wav",
segment_length=16, # 16-second segments
overlap=2) # 2-second overlap
# Step 5: Generate seamless loops
create_loop("./segments/drums_segment_001.wav",
loop_duration=8,
crossfade_duration=0.5)
# Step 6: Process vocals for detailed editing
separate_vocal_ranges("./my_song_stems/vocals.wav")
extract_vocal_harmonies("./my_song_stems/vocals.wav")
# Step 7: Import all processed audio back to Logic Pro
# - Drag stems into Logic as individual tracks
# - Use loops in Logic's loop browser
# - Apply vocal range files for detailed vocal editing
Result: Complete stem-based workflow with:
- ✅ Individual instrument tracks
- ✅ Seamless loops ready for new compositions
- ✅ Separated vocal ranges for detailed editing
- ✅ Extracted harmonies for remix work
- ✅ Complete musical analysis data
Advanced Troubleshooting 🔧
🚫 Common Issues & Solutions
Installation Problems
"ModuleNotFoundError: No module named 'demucs'"
# Install missing dependencies
pip install demucs torch torchaudio
# For CUDA support (recommended)
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118
"FFmpeg not found"
# macOS
brew install ffmpeg
# Ubuntu/Debian
sudo apt update && sudo apt install ffmpeg
# Windows
choco install ffmpeg
# Or download from https://ffmpeg.org/download.html
"MCP server not recognized"
# Ensure proper installation
pip install -e .
# Verify entry point
stem-mcp --version
# Check MCP client configuration (Claude Desktop on macOS)
cat "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
Performance Issues
"CUDA out of memory"
# Solution 1: Reduce memory usage
# Process shorter segments
split_stems("/large_file.wav", segment_length=30)
# Solution 2: Use CPU processing
# Set environment variable
export CUDA_VISIBLE_DEVICES=""
# Solution 3: Use lighter models
generate_stems("/file.wav", model_type="mdx") # Faster, less memory
# Solution 4: Clear GPU cache
import torch
torch.cuda.empty_cache()
"Slow processing speeds"
# Install CUDA-optimized PyTorch
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118
# Use optimal audio formats
# Convert to WAV 44.1kHz before processing
ffmpeg -i input.mp3 -ar 44100 -ac 2 output.wav
# Use faster models for real-time needs
generate_stems("/file.wav", model_type="mdx") # ~3x faster
"High memory usage"
# Monitor memory usage
import psutil
print(f"Memory: {psutil.virtual_memory().percent}%")
# Process in batches
for segment in split_stems("/large_file.wav", segment_length=60):
    process_segment(segment)
    # Memory is freed after each segment
Audio Quality Issues
"Poor separation quality"
# Use highest quality models
generate_stems("/file.wav", model_type="htdemucs_ft")
# Ensure high-quality input
# Use lossless formats (WAV, FLAC) when possible
# Avoid heavily compressed MP3s
# Pre-process audio for optimal results
ffmpeg -i input.mp3 -ar 48000 output.wav
"Artifacts in output"
# Check input file quality
analyze_audio("/suspicious_file.wav")
# Use different models for different content
# Electronic music: "mdx_extra"
# Live recordings: "htdemucs"
# Vocals: "htdemucs_ft"
"Loops don't sound seamless"
# Increase crossfade duration
create_loop("/drums.wav", crossfade_duration=0.5) # Longer crossfade
# Ensure tempo-aligned segments
create_loop("/drums.wav", bpm=120, loop_duration=8) # 2-bar loop at 120 BPM
# Analyze source material first
analysis = analyze_audio("/drums.wav")
print(f"Detected BPM: {analysis['tempo']}")
File Format Issues
"Unsupported audio format"
# Convert to supported format
ffmpeg -i input.m4a -ar 44100 output.wav
ffmpeg -i input.opus -ar 44100 output.wav
# Batch convert multiple files
for file in *.m4a; do
ffmpeg -i "$file" "${file%.m4a}.wav"
done
"Audio file corrupted"
# Verify file integrity
ffprobe -v error -show_entries stream=codec_name,duration -of csv=p=0 file.wav
# Repair corrupted files
ffmpeg -i corrupted.wav -c copy repaired.wav
📝 Debugging Techniques
Enable Verbose Logging
import logging
logging.basicConfig(level=logging.DEBUG)
# Or start the server with debug logging enabled
# stem-mcp --debug --log-level DEBUG
Audio Processing Diagnostics
# Add debugging to your workflow
result = analyze_audio("/test_file.wav")
print(f"File info: {result}")
# Check intermediate outputs
stems = generate_stems("/test.wav", output_dir="./debug_stems")
for stem in stems:
    analysis = analyze_audio(stem)
    print(f"{stem}: {analysis['duration']}s, {analysis['sample_rate']}Hz")
Performance Monitoring
import time
import psutil
def monitor_processing():
    start_time = time.time()
    start_memory = psutil.virtual_memory().used

    # Your processing here
    result = generate_stems("/large_file.wav")

    end_time = time.time()
    end_memory = psutil.virtual_memory().used

    print(f"Processing time: {end_time - start_time:.2f}s")
    print(f"Memory used: {(end_memory - start_memory) / 1024**2:.2f}MB")
🔍 Advanced Diagnostics
System Requirements Check
# Check Python version (3.10+ required)
python --version
# Check available memory
free -h # Linux
top -l 1 -n 0 | grep PhysMem # macOS
# Check GPU availability
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
# Check disk space
df -h
Audio System Diagnostics
import librosa
import soundfile as sf
# Test audio library functionality
try:
    y, sr = librosa.load("/test.wav")
    print(f"LibROSA working: {len(y)} samples at {sr}Hz")
except Exception as e:
    print(f"LibROSA error: {e}")

try:
    data, samplerate = sf.read("/test.wav")
    print(f"SoundFile working: {len(data)} samples")
except Exception as e:
    print(f"SoundFile error: {e}")
📊 Performance Optimization Guide
🚀 Hardware Recommendations
Optimal System Configuration
- CPU: Intel i7/i9 or AMD Ryzen 7/9 (8+ cores recommended)
- RAM: 32GB+ for professional use, 16GB minimum
- GPU: NVIDIA RTX 3060+ with 8GB+ VRAM (for CUDA acceleration)
- Storage: SSD for audio files (NVMe preferred for large files)
- OS: Linux or macOS for best performance, Windows 11 supported
Performance Benchmarks
| Model Type | GPU (RTX 4090) | CPU (i9-12900K) | Memory Usage |
|---|---|---|---|
| htdemucs | ~45s (3min song) | ~180s | 6GB VRAM / 8GB RAM |
| htdemucs_6s | ~60s (3min song) | ~240s | 8GB VRAM / 12GB RAM |
| mdx | ~25s (3min song) | ~90s | 4GB VRAM / 6GB RAM |
| mdx_extra | ~30s (3min song) | ~120s | 5GB VRAM / 8GB RAM |
⚡ Optimization Strategies
Model Selection Guide
# For speed (real-time applications)
generate_stems("/file.wav", model_type="mdx")
# For quality (studio production)
generate_stems("/file.wav", model_type="htdemucs_ft")
# For versatility (6 stems including piano/guitar)
generate_stems("/file.wav", model_type="htdemucs_6s")
# For balanced speed/quality
generate_stems("/file.wav", model_type="htdemucs")
Batch Processing Optimization
# Process multiple files efficiently
audio_files = ["/song1.wav", "/song2.wav", "/song3.wav"]
for audio_file in audio_files:
    # Reuse loaded model for better performance
    stems = generate_stems(audio_file, model_type="htdemucs")

    # Process each stem
    for stem in stems:
        analysis = analyze_audio(stem)
        # Store results for batch processing
Memory Management
import gc
import torch
def process_large_file(audio_path):
    # Clear GPU cache before processing
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

    # Process in segments for large files
    segments = split_stems(audio_path, segment_length=120)  # 2-minute segments

    results = []
    for segment in segments:
        result = generate_stems(segment)
        results.append(result)

        # Force garbage collection
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()

    return results
📚 Additional Resources
🎵 Music Production Resources
- Logic Pro User Guide: Apple's official documentation
- Demucs Research Paper: "Music Source Separation in the Waveform Domain"
- Audio Processing Theory: Understanding digital signal processing
- MCP Specification: Model Context Protocol documentation
🔗 Community & Support
- GitHub Issues: Report bugs and request features
- Discussions: Share workflows and get community help
- Discord: Real-time chat with other users (coming soon)
- Blog: Regular updates and tutorials (coming soon)
💰 Commercial Use
This project is open source and free for both personal and commercial use under the MIT license. For enterprise support, custom integrations, or commercial licensing inquiries, please contact the maintainers.
📄 License
MIT License
Copyright (c) 2024 Stem MCP Server Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
🙏 Acknowledgments
🎆 Core Technologies
- Demucs: State-of-the-art source separation by Meta Research
- LibROSA: Comprehensive audio analysis library
- PyTorch: Deep learning framework powering AI models
- MCP Protocol: Model Context Protocol specification
- SoundFile: Audio file I/O operations
🎵 Audio Processing Libraries
- FFmpeg: Universal audio/video processing framework
- NumPy & SciPy: Numerical computing foundations
- scikit-learn: Machine learning utilities for audio analysis
- Pydub: Simple audio manipulation toolkit
🔌 Integration Partners
- Logic Pro: Apple's professional music production software
- Claude Desktop: AI assistant with MCP support
- Music Production Community: Producers, engineers, and musicians worldwide
👥 Contributors
Thanks to all contributors who have helped make this project better:
- Core development team
- Beta testers and early adopters
- Community feedback and feature requests
- Documentation and example contributors
🏆 Special Recognition
- Meta Research: For developing and open-sourcing Demucs
- Anthropic: For creating the MCP protocol and supporting AI-audio workflows
- Apple: For Logic Pro integration possibilities
- Open Source Community: For the foundation libraries that make this possible
🎆 Project Stats
- 📋 Languages: Python (primary), Shell scripting
- 📦 Dependencies: 15+ core libraries, 50+ total with dev dependencies
- 🤖 AI Models: 5+ Demucs variants supported
- 🎵 Audio Formats: 8+ supported input/output formats
- ⚙️ Tools: 8+ MCP tools for comprehensive audio processing
- 📊 Performance: Up to 10x speed improvement with GPU acceleration
- 🌍 Platform Support: macOS, Linux, Windows
<div align="center">
🎵 Happy Music Making! 🎵
Transform your audio with AI-powered precision
Get Started | View Examples | Join Community
Built with ♥️ for music producers, audio engineers, and creative professionals
🎆 Powered by Demucs • 🤖 Enhanced by AI • 🎹 Designed for Logic Pro
</div>