YTPipe

Transforms YouTube videos into LLM-ready knowledge bases through transcription, semantic chunking, and vector embedding services. It provides 12 specialized MCP tools for video processing, semantic search, and SEO intelligence analysis.

<div align="center">

🎬 YTPipe - AI-Native YouTube Processing Pipeline

Python 3.8+ • License: MIT • MCP Compatible • Code style: black

Transform YouTube videos into LLM-ready knowledge bases with a production-ready MCP backend.

Quick Start • Features • Documentation • MCP Tools

</div>

✨ Features

  • 🤖 MCP Integration - 12 AI-callable tools for seamless agent integration
  • 🎯 Smart Chunking - Semantic text chunking with timeline timestamps
  • 🧠 Vector Embeddings - 384-dimensional embeddings for semantic search
  • 🔍 Full-Text Search - Context-aware transcript search
  • 📊 SEO Intelligence - AI-powered title, tag, and description optimization
  • ⏱️ Timeline Analysis - Topic evolution and keyword density tracking
  • 🏗️ Microservices - 11 independent, composable services
  • 🔐 Type-Safe - Pydantic models throughout
  • ⚡ Async-First - Non-blocking I/O operations
  • 🗄️ Multi-Backend - ChromaDB, FAISS, Qdrant support
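
The chunking, embedding, and type-safety bullets above can be pictured with a small sketch. The model below is purely illustrative; its field names are assumptions, not ytpipe's actual Pydantic models.

from typing import List

from pydantic import BaseModel


class ChunkSketch(BaseModel):
    # Hypothetical shape of a semantic chunk; field names are assumptions.
    chunk_id: str
    video_id: str
    text: str
    start_time: float  # timeline timestamp, in seconds
    end_time: float
    embedding: List[float]  # 384-dimensional vector, per the feature list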

🚀 Quick Start

# Install
git clone https://github.com/leolech14/ytpipe.git
cd ytpipe
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Process a video
ytpipe "https://youtube.com/watch?v=dQw4w9WgXcQ"

Result: Metadata + Transcript + Semantic Chunks + Embeddings + Vector Storage


🎯 Usage Examples

MCP Server (AI Agents)

python -m ytpipe.mcp.server

Then from Claude Code:

"Process this video: https://youtube.com/watch?v=VIDEO_ID"
"Search video dQw4w9WgXcQ for 'machine learning'"
"Optimize SEO for video dQw4w9WgXcQ"

CLI (Humans)

# Basic
ytpipe "https://youtube.com/watch?v=VIDEO_ID"

# Advanced
ytpipe URL --backend faiss --whisper-model large --verbose

Python API (Developers)

import asyncio

from ytpipe.core.pipeline import Pipeline

async def main():
    pipeline = Pipeline(output_dir="./output")
    result = await pipeline.process("https://youtube.com/watch?v=VIDEO_ID")

    print(f"✅ {result.metadata.title}")
    print(f"   Chunks: {len(result.chunks)}")
    print(f"   Time: {result.processing_time:.1f}s")

asyncio.run(main())

📋 MCP Tools

Pipeline (4 tools)

  • ytpipe_process_video - Full pipeline
  • ytpipe_download - Download only
  • ytpipe_transcribe - Transcribe audio
  • ytpipe_embed - Generate embeddings

Query (4 tools)

  • ytpipe_search - Full-text search
  • ytpipe_find_similar - Semantic search
  • ytpipe_get_chunk - Get chunk by ID
  • ytpipe_get_metadata - Get video info

Analytics (4 tools)

  • ytpipe_seo_optimize - SEO recommendations
  • ytpipe_quality_report - Quality metrics
  • ytpipe_topic_timeline - Topic evolution
  • ytpipe_benchmark - Performance analysis
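
As a hedged sketch of invoking one of these tools from Python, the example below reuses the `mcp` SDK's stdio client shown earlier. The argument names (`video_id`, `query`) are assumptions for illustration; the server's actual tool schemas are authoritative.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["-m", "ytpipe.mcp.server"])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Argument names are assumed, not taken from the real tool schema.
            result = await session.call_tool(
                "ytpipe_search",
                arguments={"video_id": "dQw4w9WgXcQ", "query": "machine learning"},
            )
            for item in result.content:
                print(item)

asyncio.run(main())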

🏗️ Architecture

MCP Server (12 tools) → Pipeline Orchestrator → 11 Services → Pydantic Models

Services:

  • Extractors (2): Download, Transcriber
  • Processors (4): Chunker, Embedder, VectorStore, Docling
  • Intelligence (4): Search, SEO, Timeline, Analyzer
  • Exporters (1): Dashboard

8 Processing Phases:

  1. Download → 2. Transcription → 3. Chunking → 4. Embeddings → 5. Export → 6. Dashboard → 7. Docling → 8. Vector Storage
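
As an illustration of this phase ordering (not the actual Pipeline implementation), an orchestrator can await each service in sequence. The class and method names below are hypothetical.

# Purely illustrative: hypothetical service interfaces, not ytpipe's real classes.
class PipelineSketch:
    def __init__(self, downloader, transcriber, chunker, embedder,
                 exporter, dashboard, docling, vector_store):
        self.phases = [
            downloader, transcriber, chunker, embedder,
            exporter, dashboard, docling, vector_store,
        ]

    async def process(self, url: str):
        artifact = url
        for phase in self.phases:
            # Each service consumes the previous phase's output.
            artifact = await phase.run(artifact)
        return artifact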

📊 Performance

| Metric | Value |
|--------|-------|
| Processing Speed | 4-13x real-time |
| Memory Usage | <2GB peak |
| Chunk Quality | 85%+ high quality |
| Embedding Dimension | 384 |

🔧 Requirements

  • Python 3.8+
  • FFmpeg (for audio extraction)
  • 4GB+ RAM recommended
  • GPU optional (CUDA for acceleration)

📖 Documentation


🤝 Contributing

Contributions welcome! Please read CONTRIBUTING.md first.


📝 License

MIT License - see LICENSE for details.


🙏 Credits

Built with:


📧 Contact

Leonardo Lech

  • Email: leonardo.lech@gmail.com
  • GitHub: @leolech14

<div align="center">

⭐ Star this repo if you find it useful!

Transform YouTube → Knowledge Base in seconds

</div>
