# YouTube to LinkedIn MCP Server
A Model Context Protocol (MCP) server that automates generating LinkedIn post drafts from YouTube videos. This server provides high-quality, editable content drafts based on YouTube video transcripts.
## Features
- **YouTube Transcript Extraction:** Extract transcripts from YouTube videos using video URLs
- **Transcript Summarization:** Generate concise summaries of video content using OpenAI GPT
- **LinkedIn Post Generation:** Create professional LinkedIn post drafts with customizable tone and style
- **Modular API Design:** Clean FastAPI implementation with well-defined endpoints
- **Containerized Deployment:** Ready for deployment on Smithery
## Setup Instructions

### Prerequisites
- Python 3.8+
- Docker (for containerized deployment)
- OpenAI API Key
- YouTube Data API Key (optional, but recommended for better metadata)
### Local Development

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd yt-to-linkedin
   ```

2. Create a virtual environment and install dependencies:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```

3. Create a `.env` file in the project root with your API keys:

   ```
   OPENAI_API_KEY=your_openai_api_key
   YOUTUBE_API_KEY=your_youtube_api_key
   ```

4. Run the application:

   ```bash
   uvicorn app.main:app --reload
   ```

5. Access the API documentation at http://localhost:8000/docs
### Docker Deployment

1. Build the Docker image:

   ```bash
   docker build -t yt-to-linkedin-mcp .
   ```

2. Run the container:

   ```bash
   docker run -p 8000:8000 --env-file .env yt-to-linkedin-mcp
   ```
### Smithery Deployment

1. Ensure you have the Smithery CLI installed and configured.

2. Deploy to Smithery:

   ```bash
   smithery deploy
   ```
## API Endpoints

### 1. Transcript Extraction

- **Endpoint:** `/api/v1/transcript`
- **Method:** `POST`
- **Description:** Extract transcript from a YouTube video

**Request body:**

```json
{
  "youtube_url": "https://www.youtube.com/watch?v=VIDEO_ID",
  "language": "en",
  "youtube_api_key": "your_youtube_api_key"  // Optional, provide your own YouTube API key
}
```

**Response:**

```json
{
  "video_id": "VIDEO_ID",
  "video_title": "Video Title",
  "transcript": "Full transcript text...",
  "language": "en",
  "duration_seconds": 600,
  "channel_name": "Channel Name",
  "error": null
}
```
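As a quick smoke test, the endpoint can be called with Python's `requests` library. This is a minimal sketch, assuming the server is running locally on port 8000; the video URL is a placeholder:

```python
import requests

# Minimal illustrative payload; VIDEO_ID is a placeholder.
payload = {
    "youtube_url": "https://www.youtube.com/watch?v=VIDEO_ID",
    "language": "en",
}

resp = requests.post("http://localhost:8000/api/v1/transcript", json=payload)
resp.raise_for_status()
data = resp.json()
print(data["video_title"], len(data["transcript"]))
```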
### 2. Transcript Summarization

- **Endpoint:** `/api/v1/summarize`
- **Method:** `POST`
- **Description:** Generate a summary from a video transcript

**Request body:**

```json
{
  "transcript": "Video transcript text...",
  "video_title": "Video Title",
  "tone": "professional",
  "audience": "general",
  "max_length": 250,
  "min_length": 150,
  "openai_api_key": "your_openai_api_key"  // Optional, provide your own OpenAI API key
}
```

**Response:**

```json
{
  "summary": "Generated summary text...",
  "word_count": 200,
  "key_points": [
    "Key point 1",
    "Key point 2",
    "Key point 3"
  ]
}
```
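A similar sketch for this endpoint, again assuming a local server on port 8000. The payload mirrors the request body above; in practice the transcript would come from the `/api/v1/transcript` response, and reading the OpenAI key from the environment is just one option (it can also be configured server-side):

```python
import os
import requests

# Field names follow the request schema above; the transcript is a placeholder.
payload = {
    "transcript": "Video transcript text...",
    "video_title": "Video Title",
    "tone": "professional",
    "audience": "general",
    "max_length": 250,
    "min_length": 150,
    # Optional: omit this field if OPENAI_API_KEY is set on the server.
    "openai_api_key": os.environ.get("OPENAI_API_KEY"),
}

resp = requests.post("http://localhost:8000/api/v1/summarize", json=payload)
resp.raise_for_status()
summary = resp.json()
print(summary["word_count"], summary["key_points"])
```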
### 3. LinkedIn Post Generation

- **Endpoint:** `/api/v1/generate-post`
- **Method:** `POST`
- **Description:** Generate a LinkedIn post from a video summary

**Request body:**

```json
{
  "summary": "Video summary text...",
  "video_title": "Video Title",
  "video_url": "https://www.youtube.com/watch?v=VIDEO_ID",
  "speaker_name": "Speaker Name",
  "hashtags": ["ai", "machinelearning"],
  "tone": "professional",
  "voice": "first_person",
  "audience": "technical",
  "include_call_to_action": true,
  "max_length": 1200,
  "openai_api_key": "your_openai_api_key"  // Optional, provide your own OpenAI API key
}
```

**Response:**

```json
{
  "post_content": "Generated LinkedIn post content...",
  "character_count": 800,
  "estimated_read_time": "About 1 minute",
  "hashtags_used": ["#ai", "#machinelearning"]
}
```
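A minimal sketch of calling this endpoint, under the same local-server assumption; the `summary` value here is a placeholder that would normally come from the `/api/v1/summarize` response:

```python
import requests

payload = {
    "summary": "Video summary text...",  # typically the "summary" field from /summarize
    "video_title": "Video Title",
    "video_url": "https://www.youtube.com/watch?v=VIDEO_ID",
    "hashtags": ["ai", "machinelearning"],
    "tone": "professional",
    "voice": "first_person",
    "audience": "technical",
    "include_call_to_action": True,
    "max_length": 1200,
}

resp = requests.post("http://localhost:8000/api/v1/generate-post", json=payload)
resp.raise_for_status()
post = resp.json()
print(post["character_count"], post["hashtags_used"])
print(post["post_content"])
```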
### 4. Output Formatting

- **Endpoint:** `/api/v1/output`
- **Method:** `POST`
- **Description:** Format the LinkedIn post for output

**Request body:**

```json
{
  "post_content": "LinkedIn post content...",
  "format": "json"
}
```

**Response:**

```json
{
  "content": {
    "post_content": "LinkedIn post content...",
    "character_count": 800
  },
  "format": "json"
}
```
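Putting it all together, here is a hedged end-to-end sketch chaining the four endpoints. It assumes a local server on port 8000, API keys supplied to the server via environment variables, and that the optional tuning fields omitted here (tone, audience, length limits) fall back to server-side defaults:

```python
import requests

BASE = "http://localhost:8000/api/v1"  # assumed local deployment

def youtube_to_linkedin(youtube_url: str) -> dict:
    """Chain transcript -> summarize -> generate-post -> output.

    Optional tuning fields are omitted on the assumption that the
    server applies defaults; add them as needed.
    """
    transcript = requests.post(f"{BASE}/transcript",
                               json={"youtube_url": youtube_url}).json()
    summary = requests.post(f"{BASE}/summarize", json={
        "transcript": transcript["transcript"],
        "video_title": transcript["video_title"],
    }).json()
    post = requests.post(f"{BASE}/generate-post", json={
        "summary": summary["summary"],
        "video_title": transcript["video_title"],
        "video_url": youtube_url,
    }).json()
    return requests.post(f"{BASE}/output", json={
        "post_content": post["post_content"],
        "format": "json",
    }).json()

if __name__ == "__main__":
    print(youtube_to_linkedin("https://www.youtube.com/watch?v=VIDEO_ID"))
```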
## Environment Variables

| Variable | Description | Required |
|---|---|---|
| OPENAI_API_KEY | OpenAI API key for summarization and post generation | No (can be provided in requests) |
| YOUTUBE_API_KEY | YouTube Data API key for fetching video metadata | No (can be provided in requests) |
| PORT | Port to run the server on (default: 8000) | No |
**Note:** While the API-key environment variables are optional (keys can be provided in each request instead), setting them is recommended for local development and testing. When deploying to Smithery, users will need to provide their own API keys in their requests.
## License
MIT