
Voice Mode
Install via: uvx voice-mode | pip install voice-mode | getvoicemode.com
Natural voice conversations for AI assistants. Voice Mode brings human-like voice interactions to Claude, ChatGPT, and other LLMs through the Model Context Protocol (MCP).
🖥️ Compatibility
Runs on: Linux • macOS • Windows (WSL) | Python: 3.10+ | Tested: Ubuntu 24.04 LTS, Fedora 42
✨ Features
- 🎙️ Voice conversations with Claude - ask questions and hear responses
- 🔄 Multiple transports - local microphone or LiveKit room-based communication
- 🗣️ OpenAI-compatible - works with any STT/TTS service (local or cloud)
- ⚡ Real-time - low-latency voice interactions with automatic transport selection
- 🔧 MCP Integration - seamless with Claude Desktop and other MCP clients
🎯 Simple Requirements
All you need to get started:
- 🔑 OpenAI API Key (or compatible service) - for speech-to-text and text-to-speech
- 🎤 Computer with microphone and speakers OR ☁️ LiveKit server (LiveKit Cloud or self-hosted)
Quick Start
claude mcp add --scope user voice-mode uvx voice-mode
export OPENAI_API_KEY=your-openai-key
claude
> /converse
🎬 Demo
Watch Voice Mode in action:
Example Usage
Once configured, try these prompts with Claude:
"Let's have a voice conversation"
"Ask me about my day using voice"
"Tell me a joke"
(Claude will speak and wait for your response)"Say goodbye"
(Claude will speak without waiting)
The new converse
function makes voice interactions more natural - it automatically waits for your response by default.
Installation
Prerequisites
- Python >= 3.10
- OpenAI API Key (or compatible service)
Quick Install
# Using Claude Code (recommended)
claude mcp add --scope user voice-mode uvx voice-mode
# Using UV
uvx voice-mode
# Using pip
pip install voice-mode
Manual Configuration for Different Clients
<details> <summary><strong>Claude Code (CLI)</strong></summary>
claude mcp add voice-mode -- uvx voice-mode
Or with environment variables:
claude mcp add voice-mode --env OPENAI_API_KEY=your-openai-key -- uvx voice-mode
</details>
<details> <summary><strong>Claude Desktop</strong></summary>
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "voice-mode": {
      "command": "uvx",
      "args": ["voice-mode"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
</details>
<details> <summary><strong>Cursor</strong></summary>
Add to ~/.cursor/mcp.json:
{
  "mcpServers": {
    "voice-mode": {
      "command": "uvx",
      "args": ["voice-mode"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
</details>
<details> <summary><strong>VS Code</strong></summary>
Add to your VS Code MCP config:
{
  "mcpServers": {
    "voice-mode": {
      "command": "uvx",
      "args": ["voice-mode"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
</details>
<details> <summary><strong>Windsurf</strong></summary>
{
  "mcpServers": {
    "voice-mode": {
      "command": "uvx",
      "args": ["voice-mode"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
</details>
<details> <summary><strong>Zed</strong></summary>
Add to your Zed settings.json:
{
  "context_servers": {
    "voice-mode": {
      "command": {
        "path": "uvx",
        "args": ["voice-mode"],
        "env": {
          "OPENAI_API_KEY": "your-openai-key"
        }
      }
    }
  }
}
</details>
Alternative Installation Options
<details> <summary><strong>Using Docker</strong></summary>
docker run -it --rm \
-e OPENAI_API_KEY=your-openai-key \
--device /dev/snd \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
ghcr.io/mbailey/voicemode:latest
</details>
<details> <summary><strong>Using pipx</strong></summary>
pipx install voice-mode
</details>
<details> <summary><strong>From source</strong></summary>
git clone https://github.com/mbailey/voicemode.git
cd voicemode
pip install -e .
</details>
Tools
| Tool | Description | Key Parameters |
|------|-------------|----------------|
| converse | Have a voice conversation - speak and optionally listen | message, wait_for_response (default: true), listen_duration (default: 10s), transport (auto/local/livekit) |
| listen_for_speech | Listen for speech and convert to text | duration (default: 5s) |
| check_room_status | Check LiveKit room status and participants | None |
| check_audio_devices | List available audio input/output devices | None |
| start_kokoro | Start the Kokoro TTS service | models_dir (optional, defaults to ~/Models/kokoro) |
| stop_kokoro | Stop the Kokoro TTS service | None |
| kokoro_status | Check the status of the Kokoro TTS service | None |

Note: The converse tool is the primary interface for voice interactions, combining speaking and listening in a natural flow.
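Beyond conversational prompts, any MCP client can call these tools programmatically. Below is a minimal sketch using the official mcp Python SDK; the tool name and parameters come from the table above, while the client boilerplate is generic MCP plumbing and assumes OPENAI_API_KEY is already exported:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the Voice Mode server over stdio, as an MCP host would.
    # OPENAI_API_KEY must already be set in the environment.
    params = StdioServerParameters(command="uvx", args=["voice-mode"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Speak a message and wait for a spoken reply (the default).
            result = await session.call_tool(
                "converse",
                {"message": "How was your day?", "wait_for_response": True},
            )
            print(result.content)


asyncio.run(main())
```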
Configuration
📖 See docs/configuration.md for complete setup instructions for all MCP hosts
📁 Ready-to-use config files in config-examples/
Quick Setup
The only required configuration is your OpenAI API key:
export OPENAI_API_KEY="your-key"
Optional Settings
# Custom STT/TTS services (OpenAI-compatible)
export STT_BASE_URL="http://localhost:2022/v1" # Local Whisper
export TTS_BASE_URL="http://localhost:8880/v1" # Local TTS
export TTS_VOICE="alloy" # Voice selection
# LiveKit (for room-based communication)
# See docs/livekit/ for setup guide
export LIVEKIT_URL="wss://your-app.livekit.cloud"
export LIVEKIT_API_KEY="your-api-key"
export LIVEKIT_API_SECRET="your-api-secret"
# Debug mode
export VOICE_MCP_DEBUG="true"
# Save all audio (TTS output and STT input)
export VOICE_MCP_SAVE_AUDIO="true"
Local STT/TTS Services
For privacy-focused or offline usage, Voice Mode supports local speech services:
- Whisper.cpp - Local speech-to-text with OpenAI-compatible API
- Kokoro - Local text-to-speech with multiple voice options
These services provide the same API interface as OpenAI, allowing seamless switching between cloud and local processing.
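Because the local services expose the same /v1 interface, switching between cloud and local is a one-line change with the standard openai Python SDK. A sketch assuming Kokoro is listening on the default port from Optional Settings below (the model name is illustrative, and local servers typically ignore the API key):

```python
from openai import OpenAI

# Cloud: the default client reads OPENAI_API_KEY from the environment.
cloud = OpenAI()

# Local: Kokoro TTS behind the same OpenAI-compatible /v1 interface.
local = OpenAI(base_url="http://localhost:8880/v1", api_key="not-needed")

# The identical call works against either client.
speech = local.audio.speech.create(
    model="tts-1",  # illustrative; use whatever model your local server serves
    voice="alloy",
    input="Hello from a local text-to-speech service.",
)
speech.write_to_file("hello.mp3")
```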
OpenAI API Compatibility Benefits
By strictly adhering to OpenAI's API standard, Voice Mode enables powerful deployment flexibility:
- 🔀 Transparent Routing: Users can implement their own API proxies or gateways outside of Voice Mode to route requests to different providers based on custom logic (cost, latency, availability, etc.)
- 🎯 Model Selection: Deploy routing layers that select optimal models per request without modifying Voice Mode configuration
- 💰 Cost Optimization: Build intelligent routers that balance between expensive cloud APIs and free local models
- 🔧 No Lock-in: Switch providers by simply changing the BASE_URL - no code changes required

Example: Simply set OPENAI_BASE_URL to point to your custom router:
export OPENAI_BASE_URL="https://router.example.com/v1"
export OPENAI_API_KEY="your-key"
# Voice Mode now uses your router for all OpenAI API calls
The OpenAI SDK handles this automatically - no Voice Mode configuration needed!
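As a sanity check, here is a minimal sketch showing that the stock openai Python SDK resolves both variables from the environment at client construction, so no Voice Mode code path needs to know about the router (router.example.com is the placeholder from above):

```python
import os

from openai import OpenAI

os.environ["OPENAI_BASE_URL"] = "https://router.example.com/v1"
os.environ["OPENAI_API_KEY"] = "your-key"

# No arguments needed: the SDK reads OPENAI_BASE_URL and OPENAI_API_KEY itself.
client = OpenAI()
print(client.base_url)  # -> https://router.example.com/v1/
```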
Architecture
┌─────────────────────┐     ┌──────────────────┐     ┌─────────────────────┐
│     Claude/LLM      │     │  LiveKit Server  │     │   Voice Frontend    │
│    (MCP Client)     │◄───►│    (Optional)    │◄───►│     (Optional)      │
└─────────────────────┘     └──────────────────┘     └─────────────────────┘
           │                          │
           │                          │
           ▼                          ▼
┌───────────────────────┐   ┌──────────────────┐
│ Voice MCP Server      │   │  Audio Services  │
│ • converse            │   │  • OpenAI APIs   │
│ • listen_for_speech   │◄─►│  • Local Whisper │
│ • check_room_status   │   │  • Local TTS     │
│ • check_audio_devices │   └──────────────────┘
└───────────────────────┘
Troubleshooting
Common Issues
- No microphone access: Check system permissions for terminal/application
- UV not found: Install with curl -LsSf https://astral.sh/uv/install.sh | sh
- OpenAI API error: Verify your OPENAI_API_KEY is set correctly
- No audio output: Check system audio settings and available devices
Debug Mode
Enable detailed logging and audio file saving:
export VOICE_MCP_DEBUG=true
Debug audio files are saved to: ~/voice-mcp_recordings/
Audio Saving
To save all audio files (both TTS output and STT input):
export VOICE_MCP_SAVE_AUDIO=true
Audio files are saved to: ~/voice-mcp_audio/ with timestamps in the filename.
Links
- Website: getvoicemode.com
- GitHub: github.com/mbailey/voicemode
- PyPI: pypi.org/project/voice-mode
- npm: npmjs.com/package/voicemode
Community
- Discord: Join our community
- Twitter/X: @getvoicemode
- YouTube: @getvoicemode
License
MIT - A Failmode Project