Meshy AI MCP Server
This is a Model Context Protocol (MCP) server for interacting with the Meshy AI API. It provides tools for generating 3D models from text and images, applying textures, and remeshing models.
Features
- Generate 3D models from text prompts
- Generate 3D models from images
- Apply textures to 3D models
- Remesh and optimize 3D models
- Stream task progress in real-time
- List and retrieve tasks
- Check account balance
Installation
1. Clone this repository:

   git clone https://github.com/pasie15/scenario.com-mcp-server
   cd meshy-ai-mcp-server

2. (Recommended) Set up a virtual environment:

   Using venv:

   python -m venv .venv
   # On Windows
   .\.venv\Scripts\activate
   # On macOS/Linux
   source .venv/bin/activate

   Using Conda:

   conda create --name meshy-mcp python=3.9  # Or your preferred Python version
   conda activate meshy-mcp

3. Install the MCP package:

   pip install mcp

4. Install dependencies:

   pip install -r requirements.txt

5. Create a .env file with your Meshy AI API key:

   cp .env.example .env
   # Edit .env and add your API key
Usage
Starting the Server
You can start the server directly with Python:
python src/server.py
Or using the MCP CLI:
mcp run config.json
Editor Configuration
Add this MCP server configuration to your Cline/Roo-Cline/Cursor/VS Code settings (e.g., .vscode/settings.json or user settings):
{
  "mcpServers": {
    "meshy-ai": {
      "command": "python",
      "args": [
        "path/to/your/meshy-ai-mcp-server/src/server.py" // <-- Make sure this path is correct!
      ],
      "disabled": false,
      "autoApprove": [],
      "alwaysAllow": []
    }
  }
}
Recommended: Using MCP dev mode (starts inspector)
For development and debugging, run the server using mcp dev:

mcp dev src/server.py

When running with mcp dev, you'll see output like:
Starting MCP inspector...
⚙️ Proxy server listening on port 6277
🔍 MCP Inspector is up and running at http://127.0.0.1:6274 🚀
New SSE connection
You can open the inspector URL in your browser to monitor MCP communication.
Available Tools
The server provides the following tools:
Creation Tools
- create_text_to_3d_task: Generate a 3D model from a text prompt
- create_image_to_3d_task: Generate a 3D model from an image
- create_text_to_texture_task: Apply textures to a 3D model using text prompts
- create_remesh_task: Remesh and optimize a 3D model
Retrieval Tools
- retrieve_text_to_3d_task: Get details of a Text to 3D task
- retrieve_image_to_3d_task: Get details of an Image to 3D task
- retrieve_text_to_texture_task: Get details of a Text to Texture task
- retrieve_remesh_task: Get details of a Remesh task
Listing Tools
- list_text_to_3d_tasks: List Text to 3D tasks
- list_image_to_3d_tasks: List Image to 3D tasks
- list_text_to_texture_tasks: List Text to Texture tasks
- list_remesh_tasks: List Remesh tasks
Streaming Tools
- stream_text_to_3d_task: Stream updates for a Text to 3D task
- stream_image_to_3d_task: Stream updates for an Image to 3D task
- stream_text_to_texture_task: Stream updates for a Text to Texture task
- stream_remesh_task: Stream updates for a Remesh task
Utility Tools
- get_balance: Check your Meshy AI account balance
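
Tools are invoked through an MCP client by name. As a minimal sketch, get_balance could be called with the same client pattern used in the Examples section below; the MCPClient class and use_tool method mirror those examples and may differ depending on the MCP client library you use, and get_balance is assumed here to take no arguments:

from mcp.client import MCPClient

client = MCPClient()

# Query the Meshy AI account balance through the "meshy-ai" MCP server.
# Assumption: get_balance requires no arguments.
balance = client.use_tool("meshy-ai", "get_balance", {})

print(balance)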
Resources
The server also provides the following resources:
- health://status: Health check endpoint
- task://{task_type}/{task_id}: Access task details by type and ID
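
As a rough sketch of how a resource could be read, the example below uses the official mcp Python SDK's stdio client to launch the server and fetch the health://status resource. The server path is assumed to be src/server.py as in the Usage section, and the exact return type of read_resource varies between SDK versions:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server over stdio, as in "python src/server.py" above
    server = StdioServerParameters(command="python", args=["src/server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Read the health check resource exposed by the server
            health = await session.read_resource("health://status")
            print(health)

asyncio.run(main())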
Configuration
The server can be configured using environment variables:
- MESHY_API_KEY: Your Meshy AI API key (required)
- MCP_PORT: Port for the MCP server to listen on (default: 8081)
- TASK_TIMEOUT: Maximum time to wait for a task to complete when streaming (default: 300 seconds)
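
For reference, a filled-in .env might look like the following (the API key value is a placeholder; MCP_PORT and TASK_TIMEOUT are optional and shown with their defaults):

MESHY_API_KEY=your-meshy-api-key
MCP_PORT=8081
TASK_TIMEOUT=300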
Examples
Generating a 3D Model from Text
from mcp.client import MCPClient

client = MCPClient()

result = client.use_tool(
    "meshy-ai",
    "create_text_to_3d_task",
    {
        "request": {
            "mode": "preview",
            "prompt": "a monster mask",
            "art_style": "realistic",
            "should_remesh": True
        }
    }
)

print(f"Task ID: {result['id']}")
Checking Task Status
from mcp.client import MCPClient

client = MCPClient()

task_id = "your-task-id"

result = client.use_tool(
    "meshy-ai",
    "retrieve_text_to_3d_task",
    {
        "task_id": task_id
    }
)

print(f"Status: {result['status']}")
License
This project is licensed under the MIT License - see the LICENSE file for details.