Launch Engine


Agentic pipeline that transforms ideas to revenue — for solo founders and bootstrappers.

npx -y launch-engine-mcp


Why Launch Engine?

Most MCP servers give you one tool. A GitHub integration. A database query. A Slack bot.

Launch Engine gives you 35 tools that work as a pipeline — the entire playbook from raw idea to validated revenue, running inside the AI client you already use.

  • No more blank-page paralysis. Start with scout and the system tells you exactly what to do next, every step of the way.
  • Every stage feeds the next. Buyer research flows into offer design. Offer design flows into campaign copy. Campaign copy flows into validation. Nothing is wasted.
  • Math before assets. Unit economics are validated before you build anything. You'll never spend weeks building an offer that can't work at your budget.
  • Test ideas for $50, not $5,000. rapid_test gives you signal in 3-5 days with a landing page and paid traffic — before you commit to the full pipeline.
  • Your AI becomes a co-founder, not a chatbot. It doesn't just answer questions. It executes a structured business system with you.

Install

npm install -g launch-engine-mcp

Or run directly without installing:

npx -y launch-engine-mcp

Quick Start

Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "launch-engine": {
      "command": "npx",
      "args": ["-y", "launch-engine-mcp"],
      "env": {
        "LAUNCH_ENGINE_PROJECT_DIR": "/path/to/your/project"
      }
    }
  }
}

Cursor

Add to your MCP settings (.cursor/mcp.json):

{
  "mcpServers": {
    "launch-engine": {
      "command": "npx",
      "args": ["-y", "launch-engine-mcp"],
      "env": {
        "LAUNCH_ENGINE_PROJECT_DIR": "/path/to/your/project"
      }
    }
  }
}

From Source

git clone https://github.com/ZionHopkins/launch-engine-mcp.git
cd launch-engine-mcp
npm install
npm run build
node dist/index.js

How It Works

Launch Engine is a two-layer tool system:

Layer A — 35 SOP Tools (read-only): Each tool validates prerequisites against pipeline-state.json, loads upstream context from previous stages, checks learnings.json for patterns, and returns full SOP instructions enriched with that context. Your AI executes the instructions.

Layer B — 3 Utility Tools (mutations): update_pipeline_state, save_asset, capture_learning. These handle all state writes and file creation. Your AI calls them after executing each SOP.
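To make the two layers concrete, here is a sketch of the MCP tools/call request a client might send to invoke a Layer B utility tool. The tools/call envelope is standard MCP; the argument names (path, value) are illustrative assumptions, not a documented schema:

```typescript
// Hypothetical MCP tools/call payload invoking a Layer B mutation tool.
// The "arguments" shape is an assumption for illustration only.
const updateCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "update_pipeline_state",
    arguments: { path: "stages.scout.status", value: "complete" },
  },
};

console.log(updateCall.params.name); // which tool the client is invoking
```

Layer A tools would be invoked the same way, but return SOP instructions instead of mutating state.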

The Pipeline

Three entry points:

1. scout        → Full pipeline (research → offer → build → deploy → validate)
2. rapid_test   → Quick $50-100 test (signal in 3-5 days)
3. passive_deploy → Marketplace assets (after research)

Full Pipeline Flow

LAYER 1 (Strategist):
  scout → autonomy → market_intel → research → build_blocks → stress_test → unit_economics

LAYER 2 (Builder):
  name_lock → platform + product → deploy → qa → validate_prep

LAYER 3 (Validator):
  validate_check (daily) → validate_decide → feedback → iterate

TRAFFIC:
  traffic_strategy → channels → creative_test → funnel_optimize → scale

CROSS-CUTTING:
  status | daily_check | lessons | voice_extract | dream_100

Each tool checks prerequisites automatically. If you try to run research before completing market_intel, you'll get a clear STAGE_BLOCKED message telling you exactly what to run first.
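The prerequisite gate can be pictured with a minimal TypeScript sketch. The PREREQS entries below mirror the table in this README, but the function and state shapes are assumptions, not the server's actual code:

```typescript
// Minimal sketch of the STAGE_BLOCKED prerequisite check (assumed shapes).
type PipelineState = { completed: string[] };

const PREREQS: Record<string, string[]> = {
  research: ["market_intel"],
  market_intel: ["scout", "autonomy"],
  build_blocks: ["research"],
};

// Returns OK, or STAGE_BLOCKED plus the tools to run first.
function checkPrereqs(
  tool: string,
  state: PipelineState
): { status: string; runFirst: string[] } {
  const missing = (PREREQS[tool] ?? []).filter((p) => !state.completed.includes(p));
  return missing.length > 0
    ? { status: "STAGE_BLOCKED", runFirst: missing }
    : { status: "OK", runFirst: [] };
}
```

For example, calling checkPrereqs("research", { completed: ["scout", "autonomy"] }) would report STAGE_BLOCKED with market_intel as the tool to run first.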

Tools Reference

SOP Tools (35)

| Tool | Description | Prerequisites |
| --- | --- | --- |
| scout | Market scanning — takes a raw idea, determines viability | None (entry point) |
| autonomy | Agent Autonomy Score — AI-buildable product viability | scout |
| market_intel | Deep market research with competitive scoring | scout, autonomy |
| research | Therapeutic Buyer Engine — deep persona research | market_intel |
| build_blocks | 7 Building Blocks from buyer research | research |
| stress_test | Offer scoring across 10 dimensions | build_blocks |
| unit_economics | CPA, LTV, break-even modeling | stress_test |
| name_lock | Lock business/product name | stress_test, unit_economics |
| platform | Tech stack selection and scoring | stress_test |
| product | Product architecture design | stress_test, name_lock |
| deploy | Sales pages, emails, ad copy generation | name_lock, platform, product |
| qa | 7-check persona alignment gate | deploy |
| validate_prep | Validation deployment package | deploy, qa |
| validate_check | Daily 60-second health check | validate_prep |
| validate_decide | End-of-window verdict | validate_prep |
| feedback | Performance diagnosis and fix routing | deploy |
| traffic_strategy | Traffic channel research and scoring | deploy |
| channels | Channel setup and configuration | traffic_strategy |
| creative_test | Ad creative variation testing | channels |
| funnel_optimize | CRO testing across conversion funnel | channels |
| scale | Systematic scaling of validated channels | creative_test |
| traffic_analytics | Performance reporting and attribution | channels |
| dream_100 | Relationship strategy and outreach | research |
| passive_deploy | Marketplace asset scoring and specs | research |
| passive_check | Scheduled performance checks | passive_deploy |
| passive_compound | Deploy related assets around anchors | passive_deploy |
| passive_portfolio | Quarterly portfolio review | passive_deploy |
| rapid_test | Quick idea test — landing page + ads | None (entry point) |
| rapid_check | Daily metrics vs. thresholds | rapid_test |
| rapid_graduate | Graduate test to full pipeline | rapid_check |
| rapid_status | Dashboard of all rapid tests | None |
| status | Pipeline status report | None |
| daily_check | 5-minute daily operations pulse | Live campaigns |
| lessons | Pattern library — capture and retrieve | None |
| voice_extract | Brand voice extraction from content | qa |

Utility Tools (3)

| Tool | Description |
| --- | --- |
| update_pipeline_state | Update pipeline-state.json with dot-notation paths |
| save_asset | Save files to assets/[market-name]/ directory |
| capture_learning | Capture reusable patterns to learnings.json |
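The dot-notation update that update_pipeline_state performs can be sketched as follows. This is a minimal implementation under assumed semantics, not the server's actual code:

```typescript
// Set a value in a nested object via a dot-notation path,
// creating intermediate objects as needed (assumed semantics).
function setByPath(obj: Record<string, any>, path: string, value: unknown): void {
  const keys = path.split(".");
  let cursor = obj;
  for (const key of keys.slice(0, -1)) {
    if (typeof cursor[key] !== "object" || cursor[key] === null) cursor[key] = {};
    cursor = cursor[key];
  }
  cursor[keys[keys.length - 1]] = value;
}
```

So setByPath(state, "stages.scout.status", "complete") would produce { stages: { scout: { status: "complete" } } } on an empty state object.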

Project Directory Structure

Launch Engine creates and manages files in your project directory:

your-project/
├── pipeline-state.json      # Pipeline progress tracking
├── learnings.json            # Pattern library across pipelines
└── assets/
    └── [market-name]/
        ├── research/         # Scout reports, buyer research, market intel
        ├── building-blocks/  # The 7 Building Blocks
        ├── product/          # Product Architecture Blueprint
        ├── copy/             # Sales letters, email sequences
        ├── campaigns/        # Landing pages, ad copy
        ├── traffic/          # Traffic strategy, creative tests, analytics
        ├── validation/       # Deployment packages, daily checks, verdicts
        ├── voice/            # Brand voice calibration
        ├── passive-portfolio/ # PADA outputs
        └── rapid-test/       # Rapid test assets

Configuration

The project directory is resolved in order:

  1. LAUNCH_ENGINE_PROJECT_DIR environment variable
  2. --project-dir= CLI argument
  3. Current working directory
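The three-step resolution order can be sketched as (assumed logic matching the steps above, not the server's exact source):

```typescript
// Resolve the project directory: env var first, then CLI flag, then cwd.
function resolveProjectDir(
  env: Record<string, string | undefined>,
  argv: string[],
  cwd: string
): string {
  const fromEnv = env["LAUNCH_ENGINE_PROJECT_DIR"];
  if (fromEnv) return fromEnv;
  const flag = argv.find((a) => a.startsWith("--project-dir="));
  if (flag) return flag.slice("--project-dir=".length);
  return cwd;
}
```

Note that the environment variable wins even when the CLI flag is also present.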

First Use

When you run status with no existing pipeline, you'll see:

Three paths available:

  1. rapid_test — $50-100 paid traffic test in 3-5 days
  2. scout — Full active pipeline with deep research and validation
  3. passive_deploy — Marketplace assets (requires research first)

Best Practices

Getting Started

  • Start with status — always run this first. It reads your pipeline state and tells you exactly where you are and what to do next.
  • New idea? Use rapid_test first — don't run the full pipeline on an unvalidated idea. Spend $50-100 to get signal in 3-5 days. If it graduates, then run scout.
  • One pipeline at a time — you can run multiple rapid tests in parallel, but focus on one full pipeline at a time. Context switching kills momentum.

During the Pipeline

  • Follow the order — the prerequisite system exists for a reason. Each stage feeds the next. Skipping market_intel means research has no competitive context. Skipping stress_test means you might build assets for a broken offer.
  • Don't skip qa — it catches promise-product misalignment, unattributed statistics, and persona drift. Every asset that touches a buyer must clear the QA gate.
  • Run daily_check every day during validation — it takes 60 seconds and catches problems before they burn budget.
  • Use lessons after every major decision — verdicts (ADVANCE/KILL), graduated rapid tests, creative test winners. The pattern library makes every future pipeline smarter.

Working with the AI

  • Let the AI execute the full SOP — each tool returns complete instructions. Don't interrupt midway. Let it finish the research, generate the deliverables, and save the files.
  • Review Tier 3/4 decisions carefully — the system will pause and ask for your input on market selection, pricing, kill decisions, and anything involving real money. These pauses are intentional.
  • Trust the math — unit_economics will tell you if the numbers work at your budget. If the verdict is NON-VIABLE, don't try to force it. Move on or adjust the offer.

Scaling

  • Validate before you scale — scale requires proven creative winners with 30+ conversions. Scaling unvalidated campaigns is the fastest way to burn money.
  • Compound your learnings — passive assets that reach ANCHOR status should trigger passive_compound. One proven asset can spawn 5-10 related assets.
  • Run traffic_analytics weekly — attribution drift happens. What worked last week may not work next week. Stay on top of the data.

Common Mistakes to Avoid

  • Don't build assets before stress_test passes — a GO verdict means the offer is structurally sound. REVISE or REBUILD means fix the foundation first.
  • Don't skip name_lock — changing the business name after assets are built means rebuilding everything. Lock it early.
  • Don't ignore KILL signals — if rapid test metrics hit kill thresholds, kill it. If validation says KILL, capture the lessons and move on. Sunk cost is not a strategy.
  • Don't publish without qa clearance — unvetted copy with unattributed claims or persona misalignment damages trust and conversion rates.
  • Don't run the full pipeline for every idea — that's what rapid_test is for. Test 5-10 ideas cheaply, then invest the full pipeline in the winner.

Listings

Listed on MCP Server Hub | MCP Registry

License

MIT
