AudioGen MCP Server
An MCP server that generates sound effects from text descriptions using Meta's AudioGen model. Designed for Apple Silicon Macs.
Prerequisites
- macOS with Apple Silicon (M1/M2/M3/M4)
- Python 3.9-3.11 (3.12+ not yet supported by audiocraft)
- ffmpeg (`brew install ffmpeg`)
- ~4GB disk space for model weights
- ~8GB RAM recommended
Installation
Due to audiocraft's complex dependencies (xformers doesn't build on Apple Silicon), installation requires a specific order:
```bash
# Create virtual environment with Python 3.11
uv venv ~/.audiogen-env --python 3.11
source ~/.audiogen-env/bin/activate

# Install audiocraft without its problematic dependencies
uv pip install audiocraft --no-deps

# Install the actual dependencies (skipping xformers)
uv pip install torch torchaudio transformers huggingface_hub encodec einops \
    flashy num2words sentencepiece librosa av julius spacy torchmetrics \
    hydra-core hydra-colorlog demucs lameenc

# Install audiogen-mcp
uv pip install audiogen-mcp
```
The first run will download the AudioGen model (~2GB).
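To confirm the environment before configuring Claude Code, a quick package check is enough. This is an optional sketch, not part of audiogen-mcp; it deliberately avoids importing audiocraft directly, since the server handles the xformers stubbing itself at runtime (see Troubleshooting):

```python
# Optional sanity check: run with ~/.audiogen-env/bin/python.
# find_spec only locates packages on the path; it does not import them,
# so the missing xformers dependency is not an issue here.
import importlib.util

for pkg in ("torch", "torchaudio", "audiocraft", "audiogen_mcp"):
    status = "ok" if importlib.util.find_spec(pkg) else "MISSING"
    print(f"{pkg}: {status}")
```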
Configure Claude Code
```bash
claude mcp add audiogen ~/.audiogen-env/bin/python -- -m audiogen_mcp.server
```
Or add to ~/.config/claude/claude_desktop_config.json:
```json
{
  "mcpServers": {
    "audiogen": {
      "command": "/Users/YOUR_USERNAME/.audiogen-env/bin/python",
      "args": ["-m", "audiogen_mcp.server"]
    }
  }
}
```
Available Tools
| Tool | Description |
|---|---|
| `generate_sound_effect` | Start a background generation job; returns a `job_id` |
| `check_generation_status` | Poll job status by `job_id` until completed |
| `list_generation_jobs` | List all jobs and their current status |
| `list_generated_sounds` | List previously generated audio files |
| `get_model_status` | Check whether the model is loaded and report device info |
How It Works
Generation runs in the background to avoid timeouts:
- Call `generate_sound_effect` with your prompt → returns a `job_id`
- Poll `check_generation_status` with the `job_id` every 10-15 seconds
- When status is `completed`, the result includes a `file_path` (see the sketch below)
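As a rough sketch of that flow, here is how a client-side helper might drive the two tools. `call_tool` stands in for whatever MCP client plumbing you use, and the `prompt`/`duration` arguments and response fields are assumptions based on the descriptions above rather than the exact audiogen-mcp schema:

```python
import time

def generate_and_wait(call_tool, prompt: str, duration: float = 5.0) -> str:
    """Start a generation job and poll until it finishes; returns the file path."""
    job = call_tool("generate_sound_effect", {"prompt": prompt, "duration": duration})
    job_id = job["job_id"]

    while True:
        time.sleep(15)  # poll every 10-15 seconds
        status = call_tool("check_generation_status", {"job_id": job_id})
        if status["status"] == "completed":
            return status["file_path"]
        if status["status"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
```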
Example Prompts
Once configured, ask Claude Code to generate sounds:
- "Generate an explosion sound effect"
- "Create a dark ambient tension drone, 10 seconds"
- "Make a retro 8-bit power-up sound, 2 seconds long"
- "Generate footsteps on gravel, 5 seconds"
Prompt Tips
For best results, be specific:
```
# Good
"glass breaking, single wine glass falling on tile floor"
"8-bit arcade explosion, retro game style"
"dark ambient tension drone, synth pad, ominous low frequency rumble"

# Less good
"glass sound"
"explosion"
"ambient"
```
Include style, mood, and context for better results.
Performance
- ~18 seconds to generate 1 second of audio on Apple Silicon
- 5 seconds of audio ≈ 90 seconds generation time
- 10 seconds of audio ≈ 180 seconds generation time (see the rough estimate below)
- First generation takes longer (model loading ~5s)
- Uses Metal Performance Shaders (MPS) for GPU acceleration
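Putting those numbers together, a rough wall-clock estimate (useful when choosing a polling timeout) looks like this; the 18x factor and ~5-second load time come straight from the bullets above and will vary by machine:

```python
def estimated_generation_seconds(audio_seconds: float, first_run: bool = False) -> float:
    """Rough wall-clock estimate; actual times vary by machine and load."""
    # ~18 seconds of compute per second of audio on MPS,
    # plus ~5 seconds of one-time model loading on the first generation.
    return audio_seconds * 18 + (5 if first_run else 0)

print(estimated_generation_seconds(5))   # ≈ 90
print(estimated_generation_seconds(10))  # ≈ 180
```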
Output
Generated files are saved to `~/audiogen_outputs/` by default as WAV or OGG files.
Troubleshooting
Installation fails with xformers error
This is expected on Apple Silicon. The server mocks xformers at runtime since it's only needed for CUDA. If audiocraft installation fails, try:
```bash
uv pip install torch torchaudio
uv pip install audiocraft --no-build-isolation
```
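For the curious, the runtime mock mentioned above usually comes down to registering an empty stand-in module before audiocraft is imported. The snippet below sketches that technique; it is illustrative, not the exact code audiogen-mcp ships:

```python
# Illustrative sketch of stubbing xformers before importing audiocraft.
# The real xformers kernels are only needed on CUDA, so an empty module
# can satisfy the import on Apple Silicon.
import sys
import types

if "xformers" not in sys.modules:
    xformers = types.ModuleType("xformers")
    xformers.ops = types.ModuleType("xformers.ops")
    sys.modules["xformers"] = xformers
    sys.modules["xformers.ops"] = xformers.ops
```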
Model download fails
Ensure stable internet and sufficient disk space. The model downloads from HuggingFace Hub.
Slow generation
Check the device with the `get_model_status` tool; CPU fallback is 10-20x slower than MPS.
MPS not available
Requires macOS 12.3+ and PyTorch 2.0+.
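Both requirements can be checked from the same environment with standard PyTorch and stdlib calls; nothing here is specific to audiogen-mcp:

```python
import platform
import torch

print("macOS:", platform.mac_ver()[0])          # needs 12.3 or newer
print("PyTorch:", torch.__version__)            # needs 2.0 or newer
print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())
```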
License
MIT License - see LICENSE file.
Acknowledgments
- Meta AudioCraft - The underlying AI model
- MCP - Model Context Protocol specification