Manim MCP Server
A Model Context Protocol (MCP) server for compiling and serving Manim animations.
🎯 Two Server Modes
- HTTP API Server (`app/server.py`) - For REST API calls, testing, and web integration
- Standard MCP Server (`mcp_server.py`) - For Claude Desktop, Dify, and other MCP clients
See MCP_SETUP.md for detailed MCP configuration instructions.
A FastAPI-based Model Context Protocol (MCP) server that provides two main tools:
- Manim Compile: Compile Manim code and return a video ID
- Video Download: Download a compiled Manim video by ID
Features
- Secure authentication using JWT tokens
- LangGraph integration for workflow management
- Support for different video qualities and resolutions
- Simple API endpoints for integration
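The JWT flow behind the authentication feature can be illustrated with a stdlib-only HS256 sketch. This is a hypothetical illustration of the mechanics, not the server's actual code: the helper names are assumptions, and the server itself may rely on a library such as python-jose or PyJWT.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = "your-secret-key-here"   # matches the .env setting below
ACCESS_TOKEN_EXPIRE_MINUTES = 30

def _b64(data: bytes) -> str:
    # JWTs use URL-safe base64 without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def create_access_token(subject: str) -> str:
    # Hypothetical helper: header.payload.signature, signed with HMAC-SHA256
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({
        "sub": subject,
        "exp": int(time.time()) + ACCESS_TOKEN_EXPIRE_MINUTES * 60,
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(SECRET_KEY.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64(sig)}"

def verify_token(token: str) -> bool:
    # Recompute the signature and compare in constant time
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(SECRET_KEY.encode(), signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Any tampering with the payload or signature makes `verify_token` return `False`, which is the property the server's protected endpoints depend on.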
Prerequisites
- Python 3.8+
- Manim Community Edition (v0.19.0 or later)
- FFmpeg
- Required Python packages (see `requirements.txt`)
Installation
1. Clone the repository:

   ```
   git clone <repository-url>
   cd manim-mcp-server
   ```

2. Create a virtual environment and activate it:

   ```
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install the required packages:

   ```
   pip install -r requirements.txt
   ```

4. Install Manim and its dependencies:

   ```
   pip install manim
   ```
Configuration
Set up environment variables (create a `.env` file):

```
SECRET_KEY=your-secret-key-here
ACCESS_TOKEN_EXPIRE_MINUTES=30
```
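Reading that file needs no extra dependency; here is a minimal sketch (the server may instead use a package like python-dotenv, and `load_env` is a hypothetical helper):

```python
import os

def load_env(path: str = ".env") -> None:
    """Export KEY=VALUE lines from a dotenv-style file, skipping comments and blanks."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Do not overwrite variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())
```

Call `load_env()` at startup, then read the values with `os.environ["SECRET_KEY"]`.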
Running the Server
Option 1: Using the startup script (recommended)

```
./start_server.sh
```

Option 2: Using uvicorn directly

```
uvicorn app.server:app --reload
```
The server will be available at http://localhost:8000
API Documentation
Once the server is running, you can access the interactive API documentation at:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
API Endpoints
Root
`GET /` - Get server information and available tools
Manim Compilation
- `POST /tools/manim_compile` - Compile Manim code

  ```json
  {
    "parameters": {
      "code": "from manim import *\nclass Example(Scene):\n    def construct(self):\n        circle = Circle()\n        self.play(Create(circle))",
      "scene_name": "Example"
    }
  }
  ```

  Parameters:

  - `code` (required): The Manim Python code to compile
  - `scene_name` (optional): Name of the specific scene class to compile; useful when the code defines more than one scene
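Hand-escaping the newlines inside the `code` string is error-prone; `json.dumps` applied to an ordinary multi-line string produces the same shape. A sketch of building the request body:

```python
import json

manim_code = """\
from manim import *

class Example(Scene):
    def construct(self):
        circle = Circle()
        self.play(Create(circle))
"""

# json.dumps turns real newlines into the \n escapes shown in the examples
body = json.dumps({"parameters": {"code": manim_code, "scene_name": "Example"}})
```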
Video Download
`GET /videos/{file_id}` - Download a compiled video by ID
LangGraph Compatible Endpoints
- `GET /v1/tools` - List all available tools
- `POST /v1/tools/call` - Call a tool (LangGraph compatible)

  ```json
  {
    "tool": "manim_compile",
    "parameters": {
      "code": "from manim import *\nclass Example(Scene):\n    def construct(self):\n        circle = Circle()\n        self.play(Create(circle))"
    }
  }
  ```
Example Usage
1. Check server status
```
curl http://localhost:8000/
```
2. Compile Manim code
```
curl -X 'POST' \
  'http://localhost:8000/tools/manim_compile' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "parameters": {
      "code": "from manim import *\nclass Example(Scene):\n    def construct(self):\n        circle = Circle()\n        self.play(Create(circle))"
    }
  }'
```
3. Download the compiled video
```
# Replace VIDEO_ID with the file_id from the compile response
curl -X 'GET' \
  'http://localhost:8000/videos/VIDEO_ID' \
  --output output.mp4
```
4. Compile a specific scene by name
```
curl -X 'POST' \
  'http://localhost:8000/tools/manim_compile' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "parameters": {
      "code": "from manim import *\nclass Scene1(Scene):\n    def construct(self):\n        circle = Circle()\n        self.play(Create(circle))\n\nclass Scene2(Scene):\n    def construct(self):\n        square = Square()\n        self.play(Create(square))",
      "scene_name": "Scene1"
    }
  }'
```
5. List available tools
```
curl http://localhost:8000/v1/tools
```
6. Run the example script
```
python example_usage.py
```
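The curl flow above can also be driven from Python with only the standard library. A hedged sketch: the endpoint paths come from this README, but the helper names and the `file_id` field in the compile response are assumptions.

```python
import json
import urllib.request
from typing import Optional

BASE_URL = "http://localhost:8000"

def build_compile_body(code: str, scene_name: Optional[str] = None) -> bytes:
    # Same JSON shape as the curl examples above
    params = {"code": code}
    if scene_name:
        params["scene_name"] = scene_name
    return json.dumps({"parameters": params}).encode()

def compile_scene(code: str, scene_name: Optional[str] = None) -> dict:
    # POST /tools/manim_compile and return the parsed JSON response
    req = urllib.request.Request(
        f"{BASE_URL}/tools/manim_compile",
        data=build_compile_body(code, scene_name),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def download_video(file_id: str, dest: str = "output.mp4") -> str:
    # Stream GET /videos/{file_id} to a local file
    urllib.request.urlretrieve(f"{BASE_URL}/videos/{file_id}", dest)
    return dest

# Usage (with the server running):
#   result = compile_scene("from manim import *\n...", scene_name="Example")
#   download_video(result["file_id"])   # "file_id" key is assumed
```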
Testing
See TESTING.md for detailed testing instructions.
Quick test:
```
# Run tool tests (no server needed)
python test_tools.py

# Run API tests (server must be running)
python test_api.py
```
Security
- Always use HTTPS in production
- Consider adding authentication for production deployments
- Validate and sanitize all user inputs
- Set appropriate CORS policies for your use case
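For the CORS point, FastAPI's built-in middleware can express a restrictive policy. A sketch with placeholder values (the allowed origin is hypothetical, and the real `app/server.py` may already configure this differently):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://your-frontend.example"],  # placeholder; avoid "*" in production
    allow_methods=["GET", "POST"],
    allow_headers=["Authorization", "Content-Type"],
)
```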
License
This project is licensed under the MIT License - see the LICENSE file for details.