# Whisper Speech Recognition MCP Server

中文文档 (Chinese documentation)

A high-performance speech recognition MCP server based on Faster Whisper, providing efficient audio transcription capabilities.
## Features
- Integrated with Faster Whisper for efficient speech recognition
- Batch processing acceleration for improved transcription speed
- Automatic CUDA acceleration (if available)
- Support for multiple model sizes (tiny to large-v3)
- Output formats include VTT subtitles, SRT, and JSON
- Support for batch transcription of audio files in a folder
- Model instance caching to avoid repeated loading (see the sketch after this list)
- Dynamic batch size adjustment based on GPU memory
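
As a rough illustration of the model-caching feature, loaded models can be keyed by their configuration so repeated requests reuse the same instance. This is a minimal sketch, not the project's actual `model_manager.py` code:

```python
# Minimal sketch of model-instance caching (hypothetical; see model_manager.py
# for the actual implementation). Models are keyed by their configuration so
# a second request with the same settings reuses the already-loaded instance.
from faster_whisper import WhisperModel

_model_cache: dict[tuple[str, str, str], WhisperModel] = {}

def get_model(size: str = "large-v3", device: str = "cuda",
              compute_type: str = "float16") -> WhisperModel:
    key = (size, device, compute_type)
    if key not in _model_cache:
        _model_cache[key] = WhisperModel(size, device=device,
                                         compute_type=compute_type)
    return _model_cache[key]
```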
## Installation

### Dependencies
- Python 3.10+
- faster-whisper>=0.9.0
- torch==2.6.0+cu126
- torchaudio==2.6.0+cu126
- mcp[cli]>=1.2.0
### Installation Steps

1. Clone or download this repository
2. Create and activate a virtual environment (recommended)
3. Install dependencies:

```bash
pip install -r requirements.txt
```
### PyTorch Installation Guide

Install the appropriate version of PyTorch for your CUDA version:

- CUDA 12.6:

  ```bash
  pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
  ```

- CUDA 12.1:

  ```bash
  pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
  ```

- CPU only:

  ```bash
  pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cpu
  ```

You can check your CUDA version with `nvcc --version` or `nvidia-smi`.
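
To confirm that the installed PyTorch build actually sees your GPU, a quick check like this can help:

```python
# Quick sanity check that PyTorch was built with CUDA and can see the GPU.
import torch

print("PyTorch:", torch.__version__)          # e.g. 2.6.0+cu126
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA build:", torch.version.cuda)
    print("GPU:", torch.cuda.get_device_name(0))
```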
## Usage

### Starting the Server

On Windows, simply run `start_server.bat`.

On other platforms, run:

```bash
python whisper_server.py
```
### Configuring Claude Desktop

1. Open the Claude Desktop configuration file:
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`

2. Add the Whisper server configuration:

   ```json
   {
     "mcpServers": {
       "whisper": {
         "command": "python",
         "args": ["D:/path/to/whisper_server.py"],
         "env": {}
       }
     }
   }
   ```

3. Restart Claude Desktop
## Available Tools

The server provides the following tools:

- `get_model_info` - Get information about available Whisper models
- `transcribe` - Transcribe a single audio file
- `batch_transcribe` - Batch transcribe audio files in a folder
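
For reference, tools like `transcribe` can be registered through the MCP Python SDK's FastMCP interface roughly as below. This is a simplified, hypothetical sketch; the parameter names and formatting logic in `whisper_server.py` may differ:

```python
# Hypothetical sketch of registering a transcribe tool with FastMCP;
# whisper_server.py's real signatures and formatting logic may differ.
from faster_whisper import WhisperModel
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("whisper")

@mcp.tool()
def transcribe(audio_path: str, model_name: str = "large-v3",
               output_format: str = "vtt") -> str:
    """Transcribe a single audio file and return the formatted result."""
    model = WhisperModel(model_name, device="cuda", compute_type="float16")
    segments, _info = model.transcribe(audio_path)
    # A real implementation would format segments as VTT/SRT/JSON here.
    return "\n".join(seg.text.strip() for seg in segments)

if __name__ == "__main__":
    mcp.run()
```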
## Performance Optimization Tips

- CUDA acceleration significantly improves transcription speed
- Batch processing mode is more efficient for large numbers of short audio files
- Batch size is adjusted automatically based on available GPU memory (see the sketch below)
- VAD (Voice Activity Detection) filtering improves accuracy on long audio
- Specifying the correct language can improve transcription quality
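
To illustrate the last three tips, the sketch below derives a batch size from free GPU memory and enables VAD filtering with an explicit language. The helper and its heuristic numbers are hypothetical, not taken from this project:

```python
# Hypothetical heuristic: derive a batch size from free GPU memory, and
# enable VAD filtering plus an explicit language for transcription.
import torch
from faster_whisper import WhisperModel

def pick_batch_size(per_item_gb: float = 0.5, max_batch: int = 16) -> int:
    """Rough estimate of how many files to process per group in batch mode."""
    if not torch.cuda.is_available():
        return 1
    free_bytes, _total = torch.cuda.mem_get_info()
    return max(1, min(max_batch, int(free_bytes / (per_item_gb * 1024**3))))

model = WhisperModel("large-v3", device="cuda", compute_type="float16")
segments, info = model.transcribe(
    "meeting.wav",       # example input file
    language="en",       # skipping auto-detection can improve quality
    vad_filter=True,     # drop non-speech regions in long recordings
)
print(f"batch size: {pick_batch_size()}, detected language: {info.language}")
```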
## Local Testing Methods

1. Use MCP Inspector for quick testing:

   ```bash
   mcp dev whisper_server.py
   ```

2. Use Claude Desktop for integration testing

3. Invoke the server directly from the command line (requires `mcp[cli]`):

   ```bash
   mcp run whisper_server.py
   ```
## Error Handling

The server implements the following error-handling mechanisms:

- Audio file existence checks
- Model loading failure handling
- Exception catching during transcription
- GPU memory management
- Adaptive adjustment of batch processing parameters
## Project Structure

- `whisper_server.py`: Main server code
- `model_manager.py`: Whisper model loading and caching
- `audio_processor.py`: Audio file validation and preprocessing
- `formatters.py`: Output formatting (VTT, SRT, JSON)
- `transcriber.py`: Core transcription logic
- `start_server.bat`: Windows startup script
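
As an example of what `formatters.py` covers, converting faster-whisper segments to SRT needs only each segment's start/end time and text. A minimal sketch (the project's actual formatter may differ):

```python
# Minimal SRT formatter sketch; the project's formatters.py may differ.
# faster-whisper segments expose .start/.end (seconds) and .text.
def _srt_time(seconds: float) -> str:
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{_srt_time(seg.start)} --> {_srt_time(seg.end)}\n"
            f"{seg.text.strip()}\n"
        )
    return "\n".join(blocks)
```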
## License
MIT
## Acknowledgements
This project was developed with the assistance of these amazing AI tools and models:
- GitHub Copilot - AI pair programmer
- Trae - Agentic AI coding assistant
- Cline - Autonomous AI coding agent
- DeepSeek - Advanced AI model
- Claude-3.7-Sonnet - Anthropic's powerful AI assistant
- Gemini-2.0-Flash - Google's multimodal AI model
- VS Code - Powerful code editor
- Whisper - OpenAI's speech recognition model
- Faster Whisper - Optimized Whisper implementation
Special thanks to these incredible tools and the teams behind them.