
# Video RAG (Retrieval-Augmented Generation)

A powerful video retrieval and analysis system that uses the Ragie API to process, index, and query video content with natural language. This project enables semantic search through video content, extracting relevant video chunks based on text queries.
## 🎯 MCP-powered video-RAG using Ragie

This project demonstrates how to build a video-based Retrieval-Augmented Generation (RAG) system powered by the Model Context Protocol (MCP). It uses Ragie's video ingestion and retrieval capabilities to enable semantic search and Q&A over video content, and integrates them as MCP tools via the Cursor IDE.
### Tech Stack

- **Ragie** for video ingestion + retrieval (video-RAG)
- **Cursor** as the MCP host
- **Model Context Protocol (MCP)** for AI assistant integration
## 🎯 Features

- **Video Processing**: Upload and process video files with audio-video analysis
- **Semantic Search**: Query video content using natural language
- **Video Chunking**: Extract specific video segments based on search results
- **MCP Integration**: Model Context Protocol (MCP) server for AI assistant integration
- **Jupyter Notebook Support**: Interactive development and experimentation
- **Automatic Indexing**: Clear and rebuild video indexes as needed
## 🚀 Quick Start

### Prerequisites

- Python 3.12 or higher
- Ragie API key
- Video files to process
- Cursor IDE (for MCP integration)
### Setup and Installation

#### 1. Install uv

First, install uv and set up the Python project and environment.

**macOS/Linux:**

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

**Windows:**

```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
#### 2. Clone and Set Up the Project

```bash
# Clone the repository
git clone <your-repo-url>
cd video_rag

# Create a virtual environment and activate it
uv venv
source .venv/bin/activate  # macOS/Linux
# OR
.venv\Scripts\activate     # Windows

# Install dependencies
uv sync
```
#### 3. Configure Environment Variables

Create a `.env` file in the project root:

```bash
RAGIE_API_KEY=your_ragie_api_key_here
```
#### 4. Add Your Video Files

Place your video files in the `video/` directory.
### MCP Server Setup with Cursor IDE

#### 1. Configure the MCP Server in Cursor

- Go to Cursor settings
- Select MCP Tools
- Add a new global MCP server
- In the JSON configuration, add:
```json
{
  "mcpServers": {
    "ragie": {
      "command": "uv",
      "args": [
        "--directory",
        "/absolute/path/to/project_root",
        "run",
        "server.py"
      ],
      "env": {
        "RAGIE_API_KEY": "YOUR_RAGIE_API_KEY"
      }
    }
  }
}
```

> **Note:** Replace `/absolute/path/to/project_root` with the actual absolute path to your project directory.
#### 2. Connect the MCP Server

- In the Cursor MCP settings, toggle the switch to connect the server to the host
- You should now see the MCP server listed in the MCP settings
#### 3. Available MCP Tools

Your custom MCP server provides three tools:

- `ingest_data_tool`: Ingests the video data into the Ragie index
- `retrieve_data_tool`: Retrieves relevant data from the video based on the user query
- `show_video_tool`: Creates a short video chunk from the specified segment of the original video

You can now ingest your videos, retrieve relevant data, and query it all using the Cursor Agent. The agent can even create the desired chunks from your video with a single query!
## 📖 Usage

### Basic Usage

Run the main script to process videos and perform queries:

```bash
python main.py
```

This will:

- Clear the existing index
- Ingest all videos from the `video/` directory
- Perform a sample query

### Interactive Development

Use the Jupyter notebook for interactive development:

```bash
jupyter notebook video_rag.ipynb
```

### MCP Server

Start the MCP server for AI assistant integration:

```bash
python server.py
```
## 🔧 API Reference

### Core Functions

#### `clear_index()`

Removes all documents from the Ragie index.
#### `ingest_data(directory: str)`

Processes and uploads all video files from the specified directory to the Ragie index.

**Parameters:**

- `directory` (str): Path to the directory containing video files
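For illustration, a minimal sketch of what such an ingestion function might look like. The `find_videos` helper, the `VIDEO_EXTS` set, and the explicit `ragie` client parameter are assumptions made for this sketch, not the project's actual code; the `documents.create` request shape mirrors the notebook workflow in Example 3 below.

```python
import os

# Extensions treated as video files -- an assumption for this sketch.
VIDEO_EXTS = {".mp4", ".mov", ".mkv", ".avi", ".webm"}

def find_videos(directory: str) -> list[str]:
    """Return sorted paths of video files directly inside `directory`."""
    return sorted(
        os.path.join(directory, name)
        for name in os.listdir(directory)
        if os.path.splitext(name)[1].lower() in VIDEO_EXTS
    )

def ingest_data(directory: str, ragie) -> None:
    """Upload every video in `directory` to the Ragie index.

    `ragie` is an initialized Ragie client; the request shape follows
    the notebook example in Example 3.
    """
    for path in find_videos(directory):
        with open(path, "rb") as fh:
            ragie.documents.create(request={
                "file": {"file_name": os.path.basename(path), "content": fh},
                "mode": {"video": "audio_video"},
            })
```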
#### `retrieve_data(query: str)`

Performs semantic search on the indexed video content.

**Parameters:**

- `query` (str): Natural language query to search for in video content

**Returns:**

- List of dictionaries containing:
  - `text`: The retrieved text content
  - `document_name`: Name of the source video file
  - `start_time`: Start timestamp of the video segment
  - `end_time`: End timestamp of the video segment
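The flattening from a Ragie scored chunk into this dict shape can be sketched as follows. The `ScoredChunk` stand-in class and its attribute names are assumptions based on the notebook workflow in Example 3; the real `retrieve_data` operates on the SDK's own chunk objects.

```python
from dataclasses import dataclass, field

@dataclass
class ScoredChunk:
    """Stand-in for a Ragie scored chunk; attribute names assumed
    from the notebook workflow in Example 3."""
    text: str
    document_name: str
    metadata: dict = field(default_factory=dict)

def to_result(chunk) -> dict:
    """Flatten a scored chunk into the dict shape retrieve_data returns."""
    return {
        "text": chunk.text,
        "document_name": chunk.document_name,
        "start_time": chunk.metadata.get("start_time"),
        "end_time": chunk.metadata.get("end_time"),
    }
```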
#### `chunk_video(document_name: str, start_time: float, end_time: float, directory: str = "videos")`

Extracts a specific video segment and saves it as a new file.

**Parameters:**

- `document_name` (str): Name of the source video file
- `start_time` (float): Start time in seconds
- `end_time` (float): End time in seconds
- `directory` (str): Directory containing the source video (default: "videos")

**Returns:**

- Path to the created video chunk file
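A minimal sketch of how such a function could be implemented with MoviePy (listed under Dependencies), writing into the `video_chunks/` directory shown in the project structure. The output naming scheme and the `chunk_output_path` helper are assumptions for illustration, not the project's actual code; note that MoviePy 2.x renamed `subclip` to `subclipped`, so adjust the call for older versions.

```python
import os

def chunk_output_path(document_name: str, start_time: float, end_time: float,
                      out_dir: str = "video_chunks") -> str:
    """Build an output path for a chunk; the naming scheme is an assumption."""
    stem, _ = os.path.splitext(document_name)
    return os.path.join(out_dir, f"{stem}_{start_time:.1f}-{end_time:.1f}.mp4")

def chunk_video(document_name: str, start_time: float, end_time: float,
                directory: str = "videos") -> str:
    """Cut [start_time, end_time] out of the source video and save it."""
    # Imported lazily so this module still loads without moviepy installed.
    from moviepy import VideoFileClip  # MoviePy >= 2.0; older: moviepy.editor

    src = os.path.join(directory, document_name)
    out = chunk_output_path(document_name, start_time, end_time)
    os.makedirs(os.path.dirname(out), exist_ok=True)
    with VideoFileClip(src) as clip:
        clip.subclipped(start_time, end_time).write_videofile(out)
    return out
```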
### MCP Tools

The project includes an MCP server with the following tools:

#### `ingest_data_tool(directory: str)`

MCP wrapper for the `ingest_data` function.

#### `retrieve_data_tool(query: str)`

MCP wrapper for the `retrieve_data` function.

#### `show_video_tool(document_name: str, start_time: float, end_time: float)`

MCP wrapper for the `chunk_video` function.
## 📝 Examples

### Example 1: Basic Video Processing and Query

```python
from main import clear_index, ingest_data, retrieve_data

# Clear existing index
clear_index()

# Ingest videos from directory
ingest_data("video")

# Query the video content
results = retrieve_data("What is the main topic of the video?")
print(results)
```
### Example 2: Extract Video Chunks

```python
from main import retrieve_data, chunk_video

# Get search results
results = retrieve_data("Show me the goal scoring moments")

# Extract video chunks for each result
for result in results:
    if result['start_time'] and result['end_time']:
        chunk_path = chunk_video(
            result['document_name'],
            result['start_time'],
            result['end_time']
        )
        print(f"Created chunk: {chunk_path}")
```
### Example 3: Jupyter Notebook Workflow

```python
# Load environment and initialize Ragie
import os
from dotenv import load_dotenv
from ragie import Ragie

load_dotenv()
ragie = Ragie(auth=os.getenv('RAGIE_API_KEY'))

# Upload a video
file_path = "video/messi-goals.mp4"
result = ragie.documents.create(request={
    "file": {
        "file_name": "messi-goals.mp4",
        "content": open(file_path, "rb"),
    },
    "mode": {
        "video": "audio_video"
    }
})

# Query the video
response = ragie.retrievals.retrieve(request={
    "query": "Give detailed description of the video with timestamp of the events"
})

# Process results
for chunk in response.scored_chunks:
    print(f"Time: {chunk.metadata.get('start_time')} - {chunk.metadata.get('end_time')}")
    print(f"Content: {chunk.text}")
    print("-" * 50)
```
## 🏗️ Project Structure

```
video_rag/
├── main.py            # Core functionality and main script
├── server.py          # MCP server implementation
├── video_rag.ipynb    # Jupyter notebook for development
├── pyproject.toml     # Project configuration and dependencies
├── README.md          # This file
├── video/             # Directory for video files
│   └── messi-goals.mp4  # Example video file
└── video_chunks/      # Output directory for video chunks (created automatically)
```
## 🔑 Environment Variables

| Variable | Description | Required |
|---|---|---|
| `RAGIE_API_KEY` | Your Ragie API authentication key | Yes |
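As Example 3 shows, the project loads this variable from `.env` via python-dotenv. A small, hypothetical helper for failing fast when the key is missing could look like this (`require_api_key` is not part of the project's code):

```python
import os

def require_api_key(env=os.environ) -> str:
    """Return RAGIE_API_KEY from `env`, or fail with a clear message."""
    key = env.get("RAGIE_API_KEY")
    if not key:
        raise RuntimeError(
            "RAGIE_API_KEY is not set; add it to .env or export it in your shell"
        )
    return key
```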
## 📦 Dependencies

- `ragie`: Video processing and retrieval API
- `moviepy`: Video editing and manipulation
- `python-dotenv`: Environment variable management
- `mcp`: Model Context Protocol implementation
- `ipykernel`: Jupyter notebook support
## 🤝 Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📄 License

This project is licensed under the MIT License; see the LICENSE file for details.
## 🙏 Acknowledgments

- **Ragie** for providing the video processing API
- **MoviePy** for video manipulation capabilities
- **MCP** for AI assistant integration
## 📞 Support

If you encounter any issues or have questions:

- Check the Issues page
- Create a new issue with detailed information
- Include your Python version, error messages, and steps to reproduce
## 🔄 Changelog

### v0.1.0

- Initial release
- Basic video processing and retrieval functionality
- MCP server integration
- Jupyter notebook support
- Video chunking capabilities