IRCAM Amplify MCP Server
MCP (Model Context Protocol) server for IRCAM Amplify audio processing APIs. Enables any MCP-compatible LLM to analyze music, separate stems, detect AI-generated audio, and more.
Features
- Music Analysis: Extract genre, mood, tempo, key, and instruments from audio
- Stem Separation: Split audio into vocals, drums, bass, and other instruments
- AI Detection: Detect whether music is AI-generated or human-made
- Loudness Analysis: Measure LUFS, true peak, and dynamic range
- Async Job Handling: Poll long-running operations with progress tracking
Supported Audio Formats
MP3, WAV, FLAC, OGG, M4A (max 100MB)
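If you want to pre-check a file before hosting it at a public URL, a minimal sketch of that validation follows; the helper name and constants are illustrative, and the server performs its own checks regardless.

```typescript
// Hypothetical client-side pre-check mirroring the documented limits.
const SUPPORTED_EXTENSIONS = [".mp3", ".wav", ".flac", ".ogg", ".m4a"];
const MAX_BYTES = 100 * 1024 * 1024; // 100MB

function isSupportedAudio(filename: string, sizeBytes: number): boolean {
  const ext = filename.slice(filename.lastIndexOf(".")).toLowerCase();
  return SUPPORTED_EXTENSIONS.includes(ext) && sizeBytes <= MAX_BYTES;
}
```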
Quick Start
Prerequisites
- Node.js 18+
- IRCAM Amplify API Key from app.ircamamplify.io
- An MCP-compatible client (Claude Desktop, Cline, etc.)
Installation
```bash
npm install -g ircam-amplify-mcp
```
Or run directly with npx:
```bash
npx ircam-amplify-mcp
```
Configuration
1. Set your API key
```bash
export IRCAM_AMPLIFY_API_KEY="your-api-key-here"
```
2. Configure your MCP client
Claude Desktop (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "ircam-amplify": {
      "command": "npx",
      "args": ["ircam-amplify-mcp"],
      "env": {
        "IRCAM_AMPLIFY_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
Available Tools
| Tool | Description | Input | Output |
|---|---|---|---|
| `analyze_music` | Extract genre, mood, tempo, key, instruments | `audio_url` | `{ genre[], mood[], tempo, key, instruments[] }` |
| `separate_stems` | Split into vocals, drums, bass, other | `audio_url` | `{ vocals_url, drums_url, bass_url, other_url }` or `{ job_id }` |
| `detect_ai_music` | Detect AI vs human-made music | `audio_url` | `{ confidence, classification }` |
| `analyze_loudness` | Measure LUFS, true peak, dynamic range | `audio_url` | `{ integrated_lufs, true_peak_db, loudness_range }` |
| `check_job_status` | Poll async operations | `job_id` | `{ status, progress, result }` |
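Outside a chat client, the same tools can be called programmatically. The sketch below spawns the server over stdio with the MCP TypeScript SDK and calls `analyze_music`; the client name is illustrative, and the import paths assume `@modelcontextprotocol/sdk` 1.x.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server as a child process over stdio, passing the API key through.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["ircam-amplify-mcp"],
  env: { ...(process.env as Record<string, string>), IRCAM_AMPLIFY_API_KEY: "your-api-key-here" },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Call one tool and print its content blocks.
const result = await client.callTool({
  name: "analyze_music",
  arguments: { audio_url: "https://example.com/song.mp3" },
});
console.log(result.content);
```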
Usage Examples
Analyze a song
"Analyze this song: https://example.com/song.mp3"
Response:
```json
{
  "genre": ["electronic", "house"],
  "mood": ["energetic", "uplifting"],
  "tempo": 128,
  "key": "A minor",
  "instruments": ["synthesizer", "drums", "bass"]
}
```
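For TypeScript clients, this result can be typed roughly as below. The shape is inferred from the example above; the authoritative definitions live in `src/types/ircam-api.ts` and may differ.

```typescript
// Rough shape of an analyze_music result, inferred from the example response.
interface MusicAnalysis {
  genre: string[];
  mood: string[];
  tempo: number;         // beats per minute
  key: string;           // e.g. "A minor"
  instruments: string[];
}
```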
Separate stems
"Separate the vocals from this track: https://example.com/track.mp3"
Response (sync for short files):
```json
{
  "vocals_url": "https://cdn.ircamamplify.io/stems/vocals.wav",
  "drums_url": "https://cdn.ircamamplify.io/stems/drums.wav",
  "bass_url": "https://cdn.ircamamplify.io/stems/bass.wav",
  "other_url": "https://cdn.ircamamplify.io/stems/other.wav"
}
```
Response (async for longer files):
```json
{
  "job_id": "abc123-def456"
}
```
Check if AI-generated
"Is this track AI-generated? https://example.com/mystery.mp3"
Response:
```json
{
  "confidence": 85,
  "classification": "ai_generated"
}
```
Classification values: `ai_generated`, `human_made`, or `uncertain`
Analyze loudness
"Check if this master is ready for Spotify: https://example.com/master.wav"
Response:
```json
{
  "integrated_lufs": -14.0,
  "true_peak_db": -1.0,
  "loudness_range": 6.0
}
```
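As a rough guide, Spotify normalizes playback to about -14 LUFS integrated and recommends keeping true peaks at or below -1 dBTP. A minimal sketch of a readiness check against those targets follows; the tolerance window is illustrative and not part of this server.

```typescript
interface LoudnessResult {
  integrated_lufs: number;
  true_peak_db: number;
  loudness_range: number;
}

// Rough streaming-readiness check against commonly cited Spotify targets
// (-14 LUFS integrated, true peak <= -1 dBTP). Adjust for your own delivery spec.
function isReadyForSpotify(r: LoudnessResult): boolean {
  const lufsOk = r.integrated_lufs >= -15 && r.integrated_lufs <= -13;
  const peakOk = r.true_peak_db <= -1;
  return lufsOk && peakOk;
}
```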
Check job status
"Check the status of job abc123-def456"
Response:
```json
{
  "status": "completed",
  "progress": 100,
  "result": {
    "vocals_url": "...",
    "drums_url": "...",
    "bass_url": "...",
    "other_url": "..."
  }
}
```
Status values: `pending`, `processing`, `completed`, `failed`
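For jobs that return a `job_id`, a simple polling loop is enough. A minimal sketch, assuming a `callTool` helper that forwards tool calls through your MCP client (the helper is hypothetical, not part of this package):

```typescript
// Poll check_job_status until the job completes or fails.
async function waitForJob(
  callTool: (name: string, args: object) => Promise<any>,
  jobId: string,
  intervalMs = 5000
) {
  for (;;) {
    const { status, progress, result } = await callTool("check_job_status", { job_id: jobId });
    if (status === "completed") return result;
    if (status === "failed") throw new Error(`Job ${jobId} failed`);
    console.log(`Job ${jobId}: ${status} (${progress ?? 0}%)`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```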
Error Handling
The server provides detailed error messages with actionable suggestions:
| Error Code | Meaning | Suggestion |
|---|---|---|
| `MISSING_API_KEY` | API key not configured | Set the `IRCAM_AMPLIFY_API_KEY` environment variable |
| `INVALID_API_KEY` | API key rejected | Verify key at app.ircamamplify.io |
| `INVALID_URL` | Cannot access audio URL | Ensure the URL is publicly accessible |
| `UNSUPPORTED_FORMAT` | Audio format not supported | Use MP3, WAV, FLAC, OGG, or M4A |
| `FILE_TOO_LARGE` | File exceeds 100MB limit | Use a shorter audio clip |
| `RATE_LIMITED` | Too many requests | Wait and retry |
| `JOB_NOT_FOUND` | Job ID invalid or expired | Job results expire after 24 hours |
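For `RATE_LIMITED` errors, retrying with exponential backoff is usually sufficient. A minimal sketch, assuming your client surfaces the error code on a `code` property (the exact error shape may differ):

```typescript
// Retry a call when it fails with RATE_LIMITED, backing off exponentially.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      const rateLimited = err?.code === "RATE_LIMITED";
      if (!rateLimited || attempt >= maxAttempts) throw err;
      const delayMs = 1000 * 2 ** (attempt - 1); // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```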
Development
```bash
# Install dependencies
npm install

# Run in development mode
npm run dev

# Build for production
npm run build

# Run tests
npm test

# Type check
npm run typecheck

# Lint and format
npm run lint
npm run format
```
Architecture
```
src/
├── index.ts                # MCP server entry point
├── types/
│   ├── mcp-tools.ts        # MCP tool type definitions
│   └── ircam-api.ts        # IRCAM API response types
├── tools/
│   ├── analyze-music.ts    # Music tagging tool
│   ├── separate-stems.ts   # Stem separation tool
│   ├── detect-ai-music.ts  # AI detection tool
│   ├── analyze-loudness.ts # Loudness analysis tool
│   └── check-job-status.ts # Job polling tool
└── utils/
    ├── auth.ts             # API key management
    ├── http.ts             # HTTP client with retry
    ├── validation.ts       # Input validation
    └── errors.ts           # Error formatting
```
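For orientation, `src/index.ts` wires each tool into the MCP SDK roughly as in the sketch below. This is a simplified illustration, not the actual source: the real handlers live in `src/tools/` and the schemas in `src/types/`, and the import paths assume `@modelcontextprotocol/sdk` 1.x.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Simplified entry point: register a tool, then serve over stdio.
const server = new McpServer({ name: "ircam-amplify", version: "1.0.0" });

server.tool(
  "analyze_music",
  "Extract genre, mood, tempo, key, and instruments from audio",
  { audio_url: z.string().url() },
  async ({ audio_url }) => {
    // The real implementation calls the IRCAM Amplify API (see tools/analyze-music.ts);
    // stubbed here for illustration.
    const analysis = { audio_url, genre: [], mood: [], tempo: 0, key: "", instruments: [] };
    return { content: [{ type: "text", text: JSON.stringify(analysis) }] };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);
```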
License
MIT - See LICENSE for details.
Support
- IRCAM Documentation: docs.ircamamplify.io
- Get API Key: app.ircamamplify.io
- Issues: GitHub Issues