RocketRide

RocketRide is a self-hosted, open-source AI pipeline platform (MIT) that exposes pipelines as MCP tools, with support for Claude, OpenAI, Gemini, and xAI models. Pipelines connect via WebSocket and are made available to MCP clients such as Claude Desktop, Cursor, and Windsurf.


README

<p align="center"> <img src="./images/header.jpeg" alt="Header"> </p>

<p align="center"> <a href="https://github.com/rocketride-org/rocketride-server/actions/workflows/ci.yml"><img src="https://github.com/rocketride-org/rocketride-server/actions/workflows/ci.yml/badge.svg" alt="CI"></a> <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a> <a href="https://nodejs.org/"><img src="https://img.shields.io/badge/Node.js-18%2B-green.svg" alt="Node.js 18+"></a> <a href="https://discord.gg/9hr3tdZmEG"><img src="https://img.shields.io/badge/Discord-Join%20us-5865F2.svg" alt="Discord"></a> </p>

RocketRide is a high-performance data processing engine built on a C++ core with a Python-extensible node system. With 50+ pipeline nodes, native AI/ML support, and SDKs for TypeScript, Python, and MCP, it lets you process, transform, and analyze data at scale — entirely on your own infrastructure.

Key Capabilities

  • Stay in your IDE — Build, debug, test, and scale heavy AI and data workloads with an intuitive visual builder in the environment you're used to. Stop using your browser.
  • High-performance C++ engine — Native multithreading. No bottleneck. Purpose-built for throughput, not prototypes.
  • Multi-agent workflows — Orchestrate and scale agents with built-in support for CrewAI and LangChain.
  • 50+ pipeline nodes — Python-extensible, with 13 LLM providers, 8 vector databases, OCR, NER, PII anonymization, and more.
  • TypeScript, Python & MCP SDKs — Integrate pipelines into native applications or expose them as tools for AI assistants.
  • One-click deploy — Run on Docker, on-prem, or RocketRide Cloud (👀coming soon). Our architecture is made for production, not demos.

⚡ Quick Start

  1. Install the extension for your IDE. Search for RocketRide in the extension marketplace:

    <p align="center"> <img src="./images/install.png" alt="Install RocketRide extension"> </p>

    <sub>Not seeing your IDE? Open an issue · Download directly</sub>

  2. Click the RocketRide (🚀) extension in your IDE

  3. Deploy a server — you'll be prompted on how you want to run the server. Choose the option that fits your setup:

    • Local (Recommended) — This pulls the server directly into your IDE without any additional setup.
    • On-Premises — Run the server on your own hardware for full control and data residency. Pull the image and deploy to Docker or clone this repo and build from source.
    • RocketRide Cloud (👀coming soon) — Managed hosting with our proprietary model server. No infrastructure to maintain.
  4. Create a .pipe file and start building

🔧 Building your first pipe

  1. Pipelines are recognized by the *.pipe extension. Each pipeline and its configuration is a JSON object, but the extension in your IDE renders it in our visual builder canvas.

  2. Every pipeline begins with a source node: webhook, chat, or dropper. For usage details, examples, and inspiration 💡 on how to build pipelines, check out our guides and documentation.

  3. Connect input and output lanes by type to properly wire your pipeline. Some nodes, such as agents or LLMs, can be invoked as tools by a parent node, as shown below:

<p align="center"> <img src="./images/agent_pipeline.png" alt="Pipeline canvas example"> </p>
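Since each pipeline is a JSON object under the hood, a .pipe file is just text you can version and review. The schema isn't documented in this README, so the sketch below is purely illustrative — every field name (`nodes`, `type`, `lanes`, and so on) is an assumption, not the real format:

```json
{
  "name": "summarize-chat",
  "nodes": [
    { "id": "src", "type": "chat" },
    { "id": "llm", "type": "llm", "provider": "openai" },
    { "id": "out", "type": "output" }
  ],
  "lanes": [
    { "from": "src", "to": "llm" },
    { "from": "llm", "to": "out" }
  ]
}
```

Open a real .pipe file in the IDE extension to see the canonical schema rendered on the canvas.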

  4. Run a pipeline from the canvas by pressing the ▶️ button on the source node, or from the Connection Manager directly.

  5. View all available and running pipelines below the Connection Manager. Selecting a running pipeline opens in-depth analytics: trace call trees, token usage, memory consumption, and more to optimize your pipelines before scaling and deploying.

  6. 📦 Deploy your pipelines to RocketRide.ai cloud or run them on your own infrastructure.

    • Docker — Pull the RocketRide server image and run it as a container. Requires Docker to be installed.

      docker pull ghcr.io/rocketride-org/rocketride-engine:latest
      docker run -d --name rocketride-engine -p 5565:5565 ghcr.io/rocketride-org/rocketride-engine:latest
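If you prefer Docker Compose, a minimal compose file for the same image and port might look like this — the service name and restart policy are our choices, not prescribed by the project:

```yaml
services:
  rocketride-engine:
    image: ghcr.io/rocketride-org/rocketride-engine:latest
    container_name: rocketride-engine
    ports:
      - "5565:5565"       # engine's WebSocket/API port from the docker command above
    restart: unless-stopped
```

Then `docker compose up -d` starts the engine in the background.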
      
    • RocketRide Cloud (👀coming soon) — Managed hosting with our proprietary model server and batched processing. The cheapest option to run AI workflows and pipelines at scale (seriously).

  7. Run your pipelines as standalone processes, or integrate them into your existing Python and TypeScript/JS applications using our SDKs.

  8. Use it, commit it, ship it. 🚚
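The SDK APIs aren't shown in this README, but since the server speaks WebSocket, an integration ultimately comes down to exchanging JSON messages. The minimal Python sketch below shows what building an invocation payload might look like — the message fields (`type`, `pipeline`, `inputs`) are our assumptions, not the documented wire format; consult the SDK docs for the real one:

```python
import json
import uuid


def build_invoke_message(pipeline: str, inputs: dict) -> str:
    """Serialize a hypothetical pipeline-invocation message.

    The field names here are illustrative assumptions only; the actual
    RocketRide wire format may differ.
    """
    return json.dumps({
        "id": str(uuid.uuid4()),  # correlation id for matching the response
        "type": "invoke",         # assumed message type
        "pipeline": pipeline,     # name of the .pipe file to run
        "inputs": inputs,         # inputs handed to the source node
    })


# Example: invoke a chat-sourced pipeline with a single prompt.
message = build_invoke_message("summarize.pipe", {"prompt": "Summarize this text"})
decoded = json.loads(message)
```

The SDKs presumably handle this framing (plus connection management and streaming) for you; the sketch only illustrates the shape of the exchange.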

<p align="center">Made with ❤️ in 🌁 SF & 🇪🇺 EU</p>
