
Project Jessica (ElevenLabs TTS MCP)
This project integrates ElevenLabs Text-to-Speech capabilities with Cursor through the Model Context Protocol (MCP). It consists of a FastAPI backend service and a React frontend application.
Features
- Text-to-Speech conversion using ElevenLabs API
- Voice selection and management
- MCP integration for Cursor
- Modern React frontend interface
- WebSocket real-time communication
- Pre-commit hooks for code quality
- Automatic code formatting and linting
Project Structure
jessica/
├── src/
│   ├── backend/    # FastAPI backend service
│   └── frontend/   # React frontend application
├── terraform/      # Infrastructure as Code
├── tests/          # Test suites
└── docs/           # Documentation
Requirements
- Python 3.11+
- Poetry (for backend dependency management)
- Node.js 18+ (for frontend)
- Cursor (for MCP integration)
Local Development Setup
Backend Setup
# Clone the repository
git clone https://github.com/georgi-io/jessica.git
cd jessica
# Create Python virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install backend dependencies
poetry install
# Configure environment
cp .env.example .env
# Edit .env with your ElevenLabs API key
# Install pre-commit hooks
poetry run pre-commit install
Frontend Setup
# Navigate to frontend directory
cd src/frontend
# Install dependencies
npm install
Development Servers
Starting the Backend
# Activate virtual environment if not active
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Start the backend
python -m src.backend
The backend provides:
- REST API: http://localhost:9020
- WebSocket: ws://localhost:9020/ws
- MCP Server: http://localhost:9020/sse (integrated with the main API server)
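To confirm the backend is reachable before wiring anything else up, the WebSocket endpoint can be exercised with a short Python script. This is a minimal sketch assuming the URLs above and the websockets package; the ping payload is hypothetical and not part of a documented message schema.
# Minimal connectivity check for the local backend (pip install websockets)
import asyncio
import json
import websockets

async def main():
    async with websockets.connect("ws://localhost:9020/ws") as ws:
        print("WebSocket connected")
        # Hypothetical payload; the real message schema is defined by the backend.
        await ws.send(json.dumps({"type": "ping"}))
        try:
            reply = await asyncio.wait_for(ws.recv(), timeout=5)
            print("Received:", reply)
        except asyncio.TimeoutError:
            print("No reply within 5 seconds (the connection itself is working)")

asyncio.run(main())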
Starting the Frontend
# In src/frontend directory
npm run dev
Frontend development server:
- http://localhost:5173
Environment Configuration
Backend (.env)
# ElevenLabs API
ELEVENLABS_API_KEY=your-api-key
# Server Configuration
HOST=127.0.0.1
PORT=9020
# Development Settings
DEBUG=false
RELOAD=true
Frontend (.env)
VITE_API_URL=http://localhost:9020
VITE_WS_URL=ws://localhost:9020/ws
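How the backend consumes these variables depends on its settings code; the sketch below only illustrates one common way to read them in Python, assuming python-dotenv is available. The variable names match the .env example above.
# Illustrative only; the actual backend may load its settings differently.
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the current working directory

ELEVENLABS_API_KEY = os.getenv("ELEVENLABS_API_KEY", "")
HOST = os.getenv("HOST", "127.0.0.1")
PORT = int(os.getenv("PORT", "9020"))
DEBUG = os.getenv("DEBUG", "false").lower() == "true"
RELOAD = os.getenv("RELOAD", "true").lower() == "true"

print(f"Backend will bind to {HOST}:{PORT} (debug={DEBUG}, reload={RELOAD})")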
Code Quality Tools
Backend
# Run all pre-commit hooks
poetry run pre-commit run --all-files
# Run specific tools
poetry run ruff check .
poetry run ruff format .
poetry run pytest
Frontend
# Lint
npm run lint
# Type check
npm run type-check
# Test
npm run test
Production Deployment
AWS ECR and GitHub Actions Setup
To enable automatic building and pushing of Docker images to Amazon ECR:
- Apply the Terraform configuration to create the required AWS resources:
  cd terraform
  terraform init
  terraform apply
- The GitHub Actions workflow will automatically:
  - Read the necessary configuration from the Terraform state in S3
  - Build the Docker image on pushes to the main or develop branches
  - Push the image to ECR with tags for latest and the specific commit SHA
- No additional repository variables needed! The workflow fetches all required configuration from the Terraform state.
How it Works
The GitHub Actions workflow is configured to:
- Initially assume a predefined IAM role with S3 read permissions
- Fetch and extract configuration values from the Terraform state file in S3
- Re-authenticate using the actual deployment role from the state file
- Build and push the Docker image to the ECR repository defined in the state
This approach eliminates the need to manually configure GitHub repository variables and ensures that the CI/CD process always uses the current infrastructure configuration.
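For illustration, the state lookup roughly amounts to the following Python sketch; the bucket, key, and output names are hypothetical placeholders, not the values used by the actual workflow.
# Rough sketch of reading Terraform outputs from a remote state file in S3.
# Bucket, key, and output names are placeholders for this example.
import json
import boto3  # pip install boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-terraform-state", Key="jessica/terraform.tfstate")
state = json.loads(obj["Body"].read())

# Terraform stores root-module outputs under the top-level "outputs" key.
outputs = state.get("outputs", {})
ecr_repository_url = outputs.get("ecr_repository_url", {}).get("value")
deploy_role_arn = outputs.get("deploy_role_arn", {}).get("value")
print("ECR repository:", ecr_repository_url)
print("Deployment role:", deploy_role_arn)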
Quick Overview
- Frontend: Served from S3 via CloudFront at jessica.georgi.io
- Backend API: Available at api.georgi.io/jessica
- WebSocket: Connects to api.georgi.io/jessica/ws
- Docker Image: Stored in AWS ECR and can be deployed to ECS/EKS
- Infrastructure: Managed via Terraform in this repository
MCP Integration with Cursor
- Start the backend server
- In Cursor settings, add new MCP server:
- Name: Jessica TTS
- Type: SSE
- URL: http://localhost:9020/sse
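Before adding the server in Cursor, it can help to confirm that the SSE endpoint responds. The check below only inspects the HTTP status and response headers; it assumes the endpoint streams events indefinitely and therefore does not try to read the body to completion.
# Quick reachability check for the MCP SSE endpoint (pip install requests)
import requests

with requests.get("http://localhost:9020/sse", stream=True, timeout=5) as resp:
    print("Status:", resp.status_code)
    print("Content-Type:", resp.headers.get("content-type"))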
Troubleshooting
Common Issues
- API Key Issues
  - Error: "Invalid API key"
  - Solution: Check the .env file
- Connection Problems
  - Error: "Cannot connect to MCP server"
  - Solution: Verify the backend is running and the ports are correct
- Port Conflicts
  - Error: "Address already in use"
  - Solution: Change the ports in .env (a quick port check follows this list)
- WebSocket Connection Failed
  - Error: "WebSocket connection failed"
  - Solution: Ensure the backend is running and the WebSocket URL is correct
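For the Port Conflicts case, the snippet below checks whether something is already listening on the backend port (9020 by default); adjust the port if you changed it in .env.
# Check whether port 9020 is already in use on localhost
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    in_use = s.connect_ex(("127.0.0.1", 9020)) == 0

print("Port 9020 is", "in use" if in_use else "free")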
For additional help, please open an issue on GitHub.
License
MIT