
Stealthee MCP - Tools for being early
Stealthee is a dev-first system for surfacing pre-public product signals - before they trend. It combines search, extraction, scoring, and alerting into a plug-and-play pipeline you can integrate into Claude, LangGraph, Smithery, or your own AI stack via MCP.
Use it if you're:
- An investor hunting for pre-traction signals
- A founder scanning for competitors before launch
- A researcher tracking emerging markets
- A developer building agents, dashboards, or alerting tools that need fresh product intel.
What's cookin'?
MCP Tools
Tool | Description
---|---
web_search | Search the web for stealth launches (Tavily)
url_extract | Extract content from URLs (BeautifulSoup)
score_signal | AI-powered signal scoring (OpenAI)
batch_score_signals | Batch process multiple signals
search_tech_sites | Search tech news sites only
parse_fields | Extract structured fields from HTML
run_pipeline | End-to-end detection pipeline
Installation & Setup
Prerequisites
- Python 3 and API keys for the external services (see Configure Environment below)
Quick Start
1. Clone and Setup

   ```bash
   git clone https://github.com/rainbowgore/stealthee-MCP-tools
   cd stealthee-MCP-tools
   python3 -m venv .venv
   source .venv/bin/activate
   pip install -r requirements.txt
   ```

2. Configure Environment

   Fill the .env file with your API keys:

   ```bash
   # Required
   TAVILY_API_KEY=your_tavily_key_here
   OPENAI_API_KEY=your_openai_key_here
   NIMBLE_API_KEY=your_nimble_key_here

   # Optional
   SLACK_WEBHOOK_URL=your_slack_webhook_here
   ```

3. Start Servers

   ```bash
   # MCP Server (for Claude Desktop)
   python mcp_server_stdio.py

   # FastMCP Server (for Smithery)
   smithery dev

   # FastAPI Server (Optional - Legacy)
   python start_fastapi.py
   ```
Smithery & Claude Desktop Integration
All MCP tools listed above are available out-of-the-box in Smithery. Smithery is a visual agent and workflow builder for AI tools, letting you chain, test, and orchestrate these tools with no code.
Available Tools
- web_search: Search the web for stealth launches using Tavily.
- url_extract: Extract and clean content from any URL.
- score_signal: Use OpenAI to score a single signal for stealthiness.
- batch_score_signals: Score multiple signals in one go.
- search_tech_sites: Search only trusted tech news sources.
- parse_fields: Extract structured fields (like pricing, changelog) from HTML.
- run_pipeline: End-to-end pipeline: search, extract, parse, score, and store.
How to Use in Smithery
- Open the Stealthee MCP Tools page on Smithery.
- Click "Try in Playground" to test any tool interactively.
- Use the visual workflow builder to chain tools together (e.g., search → extract → score).
- Integrate with Claude Desktop or your own agents by copying the workflow or using the API endpoints provided by Smithery.
Claude Desktop Integration
Add the following to your Claude Desktop configuration file (claude_desktop_config.json):
{
"mcpServers": {
"stealth-mcp": {
"command": "/path/to/stealthee-MCP-tools/.venv/bin/python",
"args": ["/path/to/stealthee-MCP-tools/mcp_server_stdio.py"],
"cwd": "/path/to/stealthee-MCP-tools",
"env": {
"TAVILY_API_KEY": "your_tavily_key",
"OPENAI_API_KEY": "your_openai_key"
}
}
}
}
Tool Use Cases
For Analysts & Builders:
- web_search: Find stealth product mentions across the web
- url_extract: Pull and clean raw text from landing pages
- score_signal: Judge how strongly a changelog implies a launch
- batch_score_signals: Quickly triage dozens of scraped URLs
- search_tech_sites: Limit queries to trusted domains only
- parse_fields: Extract pricing/release info from messy HTML
- run_pipeline: Full pipeline (search → extract → parse → score)
🔬 Signal Intelligence Workflow
- Search Phase: Use web_search or search_tech_sites to find relevant URLs
- Extraction Phase: Use url_extract to get clean content from URLs
- Parsing Phase: Use parse_fields to extract structured data (pricing, changelog, etc.)
- Analysis Phase: Use score_signal or batch_score_signals for AI-powered analysis
- Storage Phase: All signals are stored in the SQLite database
- Alert Phase: High-confidence signals trigger Slack notifications
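Taken together, these phases can be driven from any MCP client. Here is a minimal sketch, assuming the official mcp Python SDK and the stdio server from the Quick Start; the web_search arguments match the FastAPI example below, while the url_extract and score_signal argument names are illustrative guesses rather than documented parameters.

```python
# Sketch: chaining Stealthee tools from an MCP client over stdio.
# Assumes the `mcp` Python SDK is installed; some argument names below are guesses.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="python",
    args=["mcp_server_stdio.py"],  # run from the repo root, or use absolute paths
)

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Search Phase
            hits = await session.call_tool(
                "web_search", {"query": "stealth startup AI", "num_results": 3}
            )

            # Extraction + Analysis Phase (argument names are illustrative)
            page = await session.call_tool("url_extract", {"url": "https://example.com"})
            score = await session.call_tool("score_signal", {"content": str(page.content)})
            print(score.content)

asyncio.run(main())
```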
⚙️ FastAPI Server
You can also run this project as a FastAPI server for REST-style access to all MCP tools.
Base Endpoints
- Swagger UI: http://localhost:8000/docs
- Health Check: http://localhost:8000/health
- Tool Manifest: http://localhost:8000/tools
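A quick sanity check from Python (a sketch assuming the server is running on the default localhost:8000 and that both endpoints return JSON):

```python
# Sketch: confirm the FastAPI server is up and list the exposed tools.
import requests

print(requests.get("http://localhost:8000/health").json())  # health check
print(requests.get("http://localhost:8000/tools").json())   # tool manifest
```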
Example Usage
Search for stealth launches:
curl -X POST "http://localhost:8000/tools/web_search" \
-H "Content-Type: application/json" \
-d '{"query": "stealth startup AI", "num_results": 5}'
Run full detection pipeline:
curl -X POST "http://localhost:8000/tools/run_pipeline" \
-H "Content-Type: application/json" \
-d '{"query": "new AI product launch", "num_results": 3}'
Pipeline Parameters
- query (required): Search phrase (e.g. "AI roadmap")
- num_results (optional, default: 5): Number of search results to analyze
- target_fields (optional, default: ["pricing", "changelog"]): Fields to extract from HTML
What run_pipeline Does
- Searches tech and stealth-friendly sources using Tavily
- Extracts raw content from each result
- Parses structured signals (pricing, changelog, etc.)
- Scores each result with OpenAI to estimate stealthiness
- Stores results in local SQLite
- Notifies via Slack if confidence is high
AI Scoring Logic
The score_signal and batch_score_signals tools use GPT-3.5 to evaluate:
- Stealth indicators (e.g. private changelogs, missing press, beta flags)
- Confidence level (Low / Medium / High)
- Textual reasoning (used in UI or alerting)
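As a rough illustration only (the project's actual prompt, model settings, and response parsing are not shown here), a scoring call along these lines would cover the points above:

```python
# Illustrative only: not the project's actual prompt or parsing logic.
# Assumes OPENAI_API_KEY is set, as in the Quick Start.
from openai import OpenAI

client = OpenAI()

def score_signal_sketch(page_text: str) -> str:
    prompt = (
        "Assess whether this page hints at a stealth or pre-public product launch.\n"
        "Look for stealth indicators (private changelogs, missing press, beta flags).\n"
        "Reply with a score from 0 to 1, a confidence level (Low/Medium/High), "
        "and one sentence of reasoning.\n\n"
        f"Page content:\n{page_text[:4000]}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content
```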
Database Schema (data/signals.db)
Field | Type | Description
---|---|---
id | INTEGER | Primary key
url | TEXT | Source URL
title | TEXT | Signal title
html_excerpt | TEXT | First 500 characters of content
changelog | TEXT | Parsed changelog (optional)
pricing | TEXT | Parsed pricing info (optional)
score | REAL | Stealth likelihood (0–1)
confidence | TEXT | Confidence level
reasoning | TEXT | AI rationale for the score
created_at | TEXT | ISO timestamp
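For reference, the schema maps onto DDL along these lines (a sketch using the stdlib sqlite3 module; the table name signals and the absence of extra constraints are assumptions):

```python
# Sketch: a signals table matching the schema above.
# The table name and any constraints beyond the primary key are assumptions.
import os
import sqlite3

os.makedirs("data", exist_ok=True)
conn = sqlite3.connect("data/signals.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS signals (
        id           INTEGER PRIMARY KEY,
        url          TEXT,
        title        TEXT,
        html_excerpt TEXT,
        changelog    TEXT,
        pricing      TEXT,
        score        REAL,
        confidence   TEXT,
        reasoning    TEXT,
        created_at   TEXT
    )
    """
)
conn.commit()
conn.close()
```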
Dev Quickstart (FastAPI)
python start_fastapi.py
Then visit: http://localhost:8000/docs
Built with 💜 for those who spot what others miss.