VideoDB Director


Tools

doc_assistant

Context for creating video applications using VideoDB

code_assistant

Provides data about the VideoDB SDK, which lets developers use VideoDB in Python. IMPORTANT: whenever the user wants to write code related to videos, YouTube videos, or VideoDB specifically, always call this tool.

play_video

Plays the video at the given stream link.

call_director

The Director tool orchestrates specialized agents within the VideoDB server, efficiently handling multimedia and video-related queries. Clients should send queries that Director can interpret clearly, specifying tasks in natural language. Director then delegates these queries to appropriate agents for optimized results, using defaults and contextual information when explicit parameters are not provided. Director handles queries such as:

- Uploading & Downloading:
  - Upload media from URLs or local paths (supported media: video, audio, image)
  - Download VideoDB-generated video streams
- Indexing & Search:
  - Index spoken words or scenes in videos (spoken_words, scene indexing; scene indexing supports shot- or time-based types)
  - Search VideoDB collections semantically or by keyword (semantic, keyword search; indexing types: spoken_word, scene)
- Summarization & Subtitles:
  - Summarize video content based on custom prompts
  - Add subtitles in various languages
- Dubbing:
  - Dub videos into target languages
- Creating Videos:
  - Generate videos using specific models or engines (Fal, StabilityAI; job types: text_to_video, image_to_video)
  - Compare multiple video generation models (video_generation_comparison)
- Audio Generation & Editing:
  - Generate speech, sound effects, or background music (engines: ElevenLabs for speech/sound effects, Beatoven for music)
  - Clone voices from audio sources or overlay cloned voices onto videos
  - Censor a video based on a given prompt
- Image and Frame Generation:
  - Generate static image frames from videos at specified timestamps
  - Create or enhance images using GenAI models (job types: text_to_image, image_to_image using Fal, Replicate)
- Video Editing & Clip Generation:
  - Edit or combine multiple videos and audio files
  - Generate targeted video clips from user prompts
- Streaming & Web Search:
  - Stream videos by video ID or URL
  - Search for relevant online videos (engine: Serp)
- Transcription:
  - Generate transcripts for videos
- Pricing & Usage Information:
  - Provide detailed cost information and usage estimates

Clients should provide queries clearly aligned with Director's capabilities, allowing Director to use contextual defaults when explicit parameters like IDs or collection details are not specified. IMPORTANT: if any previous response from this method included a `session_id`, it is MANDATORY to send that `session_id` in the next request to continue the conversation.
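The session-continuation rule above can be sketched in client code. This is a minimal illustration of threading `session_id` across successive call_director invocations; the argument names follow the tool description, but the exact schema and the `sess_123` identifier are assumptions, not a verified contract.

```python
def build_director_query(query, session_id=None):
    """Build the arguments for a call_director tool invocation (sketch)."""
    args = {"query": query}
    if session_id is not None:
        # MANDATORY once any earlier response carried a session_id:
        # reuse it so Director continues the same conversation.
        args["session_id"] = session_id
    return args

# First request of a conversation: no session_id exists yet.
first = build_director_query("Upload https://example.com/talk.mp4 and index its spoken words")

# A follow-up reuses the session_id read from the previous response
# ("sess_123" is a hypothetical value).
followup = build_director_query("Summarize that video in three bullet points",
                                session_id="sess_123")
```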

README


<!-- PROJECT LOGO --> <br /> <p align="center"> <a href="https://videodb.io/"> <img src="https://codaio.imgix.net/docs/_s5lUnUCIU/blobs/bl-RgjcFrrJjj/d3cbc44f8584ecd42f2a97d981a144dce6a66d83ddd5864f723b7808c7d1dfbc25034f2f25e1b2188e78f78f37bcb79d3c34ca937cbb08ca8b3da1526c29da9a897ab38eb39d084fd715028b7cc60eb595c68ecfa6fa0bb125ec2b09da65664a4f172c2f" alt="Logo" width="300" height=""> </a>

<h3 align="center">VideoDB Agent Toolkit</h3>

<p align="center"> AI Agent toolkit for VideoDB <br /> <a href="https://videodb.io/llms.txt"><strong>llms.txt >></strong></a> <a href="https://videodb.io/llms-full.txt"><strong>llms-full.txt</strong></a> <br /> <a href="https://videodb.io/mcp"><strong>MCP</strong></a> <br /> </p> </p>

VideoDB Agent Toolkit

The VideoDB Agent Toolkit exposes VideoDB context to LLMs and agents, enabling integration with AI-driven IDEs like Cursor and chat agents like Claude Code. The toolkit automates context generation, maintenance, and discoverability; it auto-syncs SDK versions, docs, and examples, and is distributed through MCP and llms.txt.

🚀 Quick Overview

The toolkit offers context files designed for use with LLMs, structured around key components:

llms-full.txt — Comprehensive context for deep integration.

llms.txt — Lightweight metadata for quick discovery.

MCP (Model Context Protocol) — A standardized protocol for connecting LLM agents to VideoDB's tools and context.

These components leverage automated workflows to ensure your AI applications always operate with accurate, up-to-date context.

📦 Toolkit Components

1. llms-full.txt (View »)


llms-full.txt consolidates everything your LLM agent needs, including:

  • Comprehensive VideoDB overview.

  • Complete SDK usage instructions and documentation.

  • Detailed integration examples and best practices.


2. llms.txt (View »)


A streamlined file following the Answer.AI llms.txt proposal. Ideal for quick metadata exposure and LLM discovery.

ℹ️ Recommendation: Use llms.txt for lightweight discovery and metadata integration. Use llms-full.txt for complete functionality.

3. MCP (Model Context Protocol)

The VideoDB MCP Server connects to the Director backend framework, providing a single tool that covers many workflows. For development, it can be installed and run via uvx in an isolated environment. For more details on MCP, see here.

Install uv

First, install uv.

For macOS/Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh

For Windows:

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

For more details, see uv's installation guide.

Run the MCP Server

You can run the MCP server with uvx using the following command:

uvx videodb-director-mcp --api-key=<VIDEODB_API_KEY>

Update VideoDB Director MCP package

To ensure you're using the latest version of the MCP server with uvx, start by clearing the cache:

uv cache clean

This command removes any outdated cached packages of videodb-director-mcp, allowing uvx to fetch the most recent version.

If you always want to use the latest version of the MCP server, update your command as follows:

uvx videodb-director-mcp@latest --api-key=<VIDEODB_API_KEY>
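MCP clients typically launch the server themselves from a configuration file. As a hedged example, a Claude Desktop-style client configuration might register the server like this; the `videodb-director` key is illustrative, and the exact file location and schema depend on your client:

```json
{
  "mcpServers": {
    "videodb-director": {
      "command": "uvx",
      "args": ["videodb-director-mcp", "--api-key=<VIDEODB_API_KEY>"]
    }
  }
}
```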

<br/>

🧠 Anatomy of LLM Context Files

LLM context files in VideoDB are modular, automatically generated, and continuously updated from multiple sources:

🧩 Modular Structure:

  • Instructions — Best practices and prompt guidelines View »

  • SDK Context — SDK structure, classes, and interface definitions View »

  • Docs Context — Summarized product documentation View »

  • Examples Context — Real-world notebook examples View »

<img src="./token_breakdown.png" alt="Token Breakdown" width="400"/>

Automated Maintenance:

  • Managed through GitHub Actions for automated updates.
  • Triggered by changes to SDK repositories, documentation, or examples.
  • Maintained centrally via a config.yaml file.

🛠️ Automation with GitHub Actions

Automatic context generation ensures your applications always have the latest information:

🔹 SDK Context Workflow (View)

  • Automatically generates documentation from SDK repo updates.
  • Uses Sphinx for Python SDKs.

🔹 Docs Context Workflow (View)

  • Scrapes and summarizes documentation using FireCrawl and LLM-powered summarization.

🔹 Examples Context Workflow (View)

  • Converts and summarizes notebooks into practical context examples.

🔹 Master Context Workflow (View)

  • Combines all sub-components into unified llms-full.txt.
  • Generates standards-compliant llms.txt.
  • Updates documentation with token statistics for transparency.
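The four workflows above follow a common trigger-and-regenerate pattern. A minimal sketch of what such a workflow could look like follows; the event names, paths, and generation script are illustrative assumptions, not the toolkit's actual workflow definitions:

```yaml
name: update-context              # illustrative workflow name
on:
  repository_dispatch:
    types: [sdk-updated]          # e.g. fired when an SDK repo publishes changes
  push:
    paths:
      - "config.yaml"             # a central-config change triggers regeneration
jobs:
  regenerate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python scripts/generate_context.py   # hypothetical generation script
      - run: git commit -am "chore: refresh context" && git push
```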

🛠️ Customization via config.yaml

The config.yaml file centralizes all configurations, allowing easy customization:

  • Inclusion & Exclusion Patterns for documentation and notebook processing
  • Custom LLM Prompts for precise summarization tailored to each document type
  • Layout Configuration for combining context components seamlessly

config.yaml > llms_full_txt_file defines how llms-full.txt is assembled:

llms_full_txt_file:
  input_files:
    - name: Instructions
      file_path: "context/instructions/prompt.md"
    - name: SDK Context
      file_path: "context/sdk/context/index.md"
    - name: Docs Context
      file_path: "context/docs/docs_context.md"
    - name: Examples Context
      file_path: "context/examples/examples_context.md"
  output_files:
    - name: llms_full_txt
      file_path: "context/llms-full.txt"
    - name: llms_full_md
      file_path: "context/llms-full.md"
  layout: |
    {{FILE1}}

    {{FILE2}}

    {{FILE3}}

    {{FILE4}}
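The `layout` template above can be expanded mechanically: each `{{FILEn}}` placeholder is replaced by the contents of the n-th input file. The real assembly runs in GitHub Actions; this is a minimal stand-in mirroring the same config shape, with file reading injected so the substitution logic is clear.

```python
LLMS_FULL_CONFIG = {
    "input_files": [
        {"name": "Instructions", "file_path": "context/instructions/prompt.md"},
        {"name": "SDK Context", "file_path": "context/sdk/context/index.md"},
        {"name": "Docs Context", "file_path": "context/docs/docs_context.md"},
        {"name": "Examples Context", "file_path": "context/examples/examples_context.md"},
    ],
    "layout": "{{FILE1}}\n\n{{FILE2}}\n\n{{FILE3}}\n\n{{FILE4}}",
}

def assemble_llms_full(config, read_file):
    """Substitute each {{FILEn}} placeholder with the n-th input file's contents."""
    text = config["layout"]
    for n, entry in enumerate(config["input_files"], start=1):
        text = text.replace("{{FILE%d}}" % n, read_file(entry["file_path"]))
    return text

# Example with an in-memory reader instead of the real context files:
preview = assemble_llms_full(LLMS_FULL_CONFIG, read_file=lambda p: "<%s>" % p)
```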

💡 Best Practices for Context-Driven Development

  • Automate Context Updates: Leverage GitHub Actions to maintain accuracy.
  • Tailored Summaries: Use custom LLM prompts to ensure context relevance.
  • Seamless Integration: Continuously integrate with existing LLM agents or IDEs.

By following these practices, you ensure your AI applications have reliable, relevant, and up-to-date context—critical for effective agent performance and developer productivity.


🚀 Get Started

Clone the toolkit repository and follow the setup instructions in config.yaml to start integrating VideoDB contexts into your LLM-powered applications today.



