# brainstorm-mcp
An MCP server that runs multi-round brainstorming debates between AI models. Connect it to Claude Code (or any MCP client) and let GPT, DeepSeek, Groq, Ollama, and others debate your ideas — then get a synthesized final output.
## How it works
- You ask Claude: "Brainstorm the best architecture for a real-time app"
- The tool sends the topic to all configured AI models in parallel
- Each model responds independently (Round 1)
- Models see each other's responses and refine their positions (Rounds 2-N)
- A synthesizer model produces a final consolidated output
- You get back a structured debate with the synthesis
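Under the hood this is an ordinary MCP tool call. As a minimal sketch (the argument names come from the `brainstorm` tool reference below; the topic string is illustrative), the request your client sends might look like:

```json
{
  "name": "brainstorm",
  "arguments": {
    "topic": "Best architecture for a real-time app",
    "rounds": 3
  }
}
```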
## Quick Start
```bash
# Clone and build
git clone https://github.com/AIPoweredSolutions/brainstorm-mcp.git
cd brainstorm-mcp
npm install
npm run build
```
### Configure providers
Copy the example config and add your API keys:
```bash
cp brainstorm.config.example.json brainstorm.config.json
```
Edit `brainstorm.config.json`:
```json
{
  "providers": {
    "openai": {
      "model": "gpt-4o",
      "apiKeyEnv": "OPENAI_API_KEY"
    },
    "deepseek": {
      "model": "deepseek-chat",
      "apiKeyEnv": "DEEPSEEK_API_KEY"
    }
  }
}
```
### Connect to Claude Code
Add to your project's `.mcp.json`:
```json
{
  "mcpServers": {
    "brainstorm": {
      "command": "node",
      "args": ["/path/to/brainstorm-mcp/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "DEEPSEEK_API_KEY": "sk-...",
        "BRAINSTORM_CONFIG": "/path/to/brainstorm.config.json"
      }
    }
  }
}
```
Restart Claude Code, then just ask:
"Brainstorm the best way to handle authentication in a microservices architecture"
## Configuration
### brainstorm.config.json
The config file defines AI providers. Known providers (openai, deepseek, groq, mistral, together) don't need a baseURL — it's auto-detected.
```json
{
  "providers": {
    "openai": {
      "model": "gpt-4o",
      "apiKeyEnv": "OPENAI_API_KEY"
    },
    "deepseek": {
      "model": "deepseek-chat",
      "apiKeyEnv": "DEEPSEEK_API_KEY"
    },
    "groq": {
      "model": "llama-3.3-70b-versatile",
      "apiKeyEnv": "GROQ_API_KEY"
    },
    "ollama": {
      "model": "llama3.1",
      "baseURL": "http://localhost:11434/v1"
    }
  }
}
```
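Providers outside the known list need an explicit `baseURL`. As a hypothetical sketch (the provider name, model ID, env var, and URL below are placeholders, assuming the server speaks an OpenAI-compatible chat API as the Ollama `/v1` example suggests):

```json
{
  "providers": {
    "my-gateway": {
      "model": "my-model-id",
      "apiKeyEnv": "MY_GATEWAY_API_KEY",
      "baseURL": "https://llm.example.com/v1"
    }
  }
}
```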
| Field | Required | Description |
|---|---|---|
| `model` | Yes | Default model ID to use |
| `apiKeyEnv` | No | Environment variable name for the API key. Omit for local models (Ollama) |
| `baseURL` | No | API endpoint. Auto-detected for known providers |
### Fallback: Environment Variables
If no config file exists, the server detects providers from env vars:
```bash
OPENAI_API_KEY=sk-...
OPENAI_DEFAULT_MODEL=gpt-4o
DEEPSEEK_API_KEY=sk-...
DEEPSEEK_DEFAULT_MODEL=deepseek-chat
```
## MCP Tools
### brainstorm
Run a multi-round debate. Only `topic` is required; everything else has sensible defaults.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `topic` | string | required | What to brainstorm about |
| `models` | string[] | all providers | Specific models as `provider:model` |
| `rounds` | number | 3 | Number of debate rounds (1-10) |
| `synthesizer` | string | first model | Model for final synthesis |
| `systemPrompt` | string | — | Custom system prompt |
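For example, a fully specified set of arguments (the model IDs and prompt text here are illustrative; the `provider:model` format is from the table above) could look like:

```json
{
  "topic": "How should a microservices platform handle authentication?",
  "models": ["openai:gpt-4o", "deepseek:deepseek-chat"],
  "rounds": 2,
  "synthesizer": "openai:gpt-4o",
  "systemPrompt": "Argue concretely; weigh trade-offs, not platitudes."
}
```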
### list_providers
Shows all configured providers, their default models, and API key status.
### add_provider
Dynamically add a provider at runtime.
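The exact argument schema isn't documented here; assuming it mirrors the config fields above (a provider name plus `model`, `apiKeyEnv`, and optionally `baseURL`), a call might look like:

```json
{
  "name": "add_provider",
  "arguments": {
    "name": "groq",
    "model": "llama-3.3-70b-versatile",
    "apiKeyEnv": "GROQ_API_KEY"
  }
}
```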
## Features
- Multi-round debates — Models see and critique each other's responses
- Parallel execution — All models respond concurrently within each round
- Per-model timeouts — 2-minute timeout per API call, one slow model won't block others
- Context truncation — Automatically truncates history when approaching context limits
- Cost estimation — Shows estimated token usage and cost
- Resilient — One model failing doesn't abort the debate
- Synthesizer fallback — If the primary synthesizer fails, tries other models
- GPT-5.x / o3 / o4 compatible — Automatically uses `max_completion_tokens` for newer OpenAI models, as sketched below
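On that last point: OpenAI's newer reasoning models reject the legacy `max_tokens` field and require `max_completion_tokens` instead. Assuming the standard chat-completions request body, the difference is just the field name:

```json
{
  "model": "o3",
  "messages": [{ "role": "user", "content": "Round 2: critique the positions above." }],
  "max_completion_tokens": 1024
}
```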
## License
MIT