Claude Concilium

License: MIT · Node.js 18+ · MCP Protocol

Multi-agent AI consultation framework for Claude Code via MCP.

Get a second (and third) opinion from other LLMs when Claude Code alone isn't enough.

Claude Code ──┬── OpenAI (Codex CLI) ──► Opinion A
              ├── Gemini (gemini-cli) ─► Opinion B
              │
              └── Synthesis ◄── Consensus or iterate

The Problem

Claude Code is powerful, but one brain can miss bugs, overlook edge cases, or get stuck in a local optimum. Critical decisions benefit from diverse perspectives.

The Solution

Concilium runs parallel consultations with multiple LLMs through the standard MCP protocol. Each LLM server wraps a CLI tool — no API keys needed for the primary providers (they use OAuth).

Key features:

  • Parallel consultation with 2+ AI agents
  • Production-grade fallback chains with error detection
  • Each MCP server works standalone or as part of Concilium
  • Plug & play: clone, npm install, add to .mcp.json

Architecture

┌──────────────────────────────────────────────────┐
│                   Claude Code                    │
│                                                  │
│  "Review this code for race conditions"          │
│                                                  │
│  ┌─────────────┐    ┌─────────────┐              │
│  │ MCP Call #1 │    │ MCP Call #2 │  (parallel)  │
│  └──────┬──────┘    └──────┬──────┘              │
│         │                  │                     │
└─────────┼──────────────────┼─────────────────────┘
          │                  │
          ▼                  ▼
   ┌──────────────┐   ┌──────────────┐
   │  mcp-openai  │   │  mcp-gemini  │     Primary agents
   │  (codex exec)│   │ (gemini -p)  │
   └──────┬───────┘   └──────┬───────┘
          │                  │
          ▼                  ▼
   ┌──────────────┐   ┌──────────────┐
   │   OpenAI     │   │   Google     │     LLM providers
   │   (OAuth)    │   │   (OAuth)    │
   └──────────────┘   └──────────────┘

   Fallback chain (on quota/error):
   OpenAI → Qwen → DeepSeek
   Gemini → Qwen → DeepSeek

Quickstart

1. Clone and install

git clone https://github.com/spyrae/claude-concilium.git
cd claude-concilium

# Install dependencies for each server
cd servers/mcp-openai && npm install && cd ../..
cd servers/mcp-gemini && npm install && cd ../..
cd servers/mcp-qwen && npm install && cd ../..

# Verify all servers work (no CLI tools required)
node test/smoke-test.mjs

Expected output:

PASS mcp-openai  (Tools: openai_chat, openai_review)
PASS mcp-gemini  (Tools: gemini_chat, gemini_analyze)
PASS mcp-qwen    (Tools: qwen_chat)
All tests passed.

2. Set up providers

Pick at least 2 providers:

| Provider | Auth | Free Tier | Setup |
|----------|------|-----------|-------|
| OpenAI | codex login (OAuth) | ChatGPT Plus weekly credits | Setup guide |
| Gemini | Google OAuth | 1000 req/day | Setup guide |
| Qwen | qwen login or API key | Varies | Setup guide |
| DeepSeek | API key | Pay-per-use (cheap) | Setup guide |

3. Add to Claude Code

Copy config/mcp.json.example and update paths:

# Edit the example with your actual paths
cp config/mcp.json.example .mcp.json
# Update "/path/to/claude-concilium" with actual path

Or add servers individually to your existing .mcp.json:

{
  "mcpServers": {
    "mcp-openai": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-openai/server.js"],
      "env": {
        "CODEX_HOME": "~/.codex-minimal"
      }
    },
    "mcp-gemini": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-gemini/server.js"]
    }
  }
}

4. Install the skill (optional)

Copy the Concilium skill to your Claude Code commands:

cp skill/ai-concilium.md ~/.claude/commands/ai-concilium.md

Now use /ai-concilium in Claude Code to trigger a multi-agent consultation.

MCP Servers

Each server can be used independently — you don't need all of them.

| Server | CLI Tool | Auth | Tools |
|--------|----------|------|-------|
| mcp-openai | codex | OAuth (ChatGPT Plus) | openai_chat, openai_review |
| mcp-gemini | gemini | Google OAuth | gemini_chat, gemini_analyze |
| mcp-qwen | qwen | API key / CLI login | qwen_chat |

DeepSeek uses the existing deepseek-mcp-server npm package — no custom server needed.
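For reference, a DeepSeek entry in .mcp.json could look like the following (a hedged sketch: the npx invocation and the env variable name are assumptions, so check the deepseek-mcp-server README for the exact values):

```json
{
  "mcpServers": {
    "deepseek": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "deepseek-mcp-server"],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key"
      }
    }
  }
}
```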

How It Works

Consultation Flow

  1. Formulate — describe the problem concisely (under 500 chars)
  2. Send in parallel — OpenAI + Gemini get the same prompt
  3. Handle errors — if a provider fails, fallback chain kicks in (Qwen → DeepSeek)
  4. Synthesize — compare responses, find consensus
  5. Iterate (optional) — resolve disagreements with follow-up questions
  6. Decide — apply the synthesized solution
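Steps 2-4 above can be sketched as follows (illustrative only: `consult` stands in for the actual MCP tool calls such as openai_chat and gemini_chat):

```javascript
// Hedged sketch of the consultation flow: send the same prompt to both
// agents in parallel and keep whatever succeeds.
async function consultParallel(prompt, consult) {
  const agents = ['openai', 'gemini'];
  const settled = await Promise.allSettled(agents.map((a) => consult(a, prompt)));
  // Pair each result with its agent; drop rejected or failed calls
  return settled
    .map((r, i) => ({ agent: agents[i], ...(r.status === 'fulfilled' ? r.value : { ok: false }) }))
    .filter((o) => o.ok);
}

// Demo with a stub provider that always answers
const stub = async (agent, p) => ({ ok: true, text: `${agent}: looks fine` });
consultParallel('Review this code for race conditions', stub)
  .then((opinions) => console.log(opinions.map((o) => o.agent))); // [ 'openai', 'gemini' ]
```

If only one opinion survives, the fallback chain supplies a replacement before synthesis; if both agree, you have consensus, otherwise iterate with follow-up questions.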

Error Detection

All servers detect provider-specific errors and return structured responses:

| Error Type | Meaning | Action |
|------------|---------|--------|
| QUOTA_EXCEEDED | Rate/credit limit hit | Use fallback provider |
| AUTH_EXPIRED / AUTH_REQUIRED | Token needs refresh | Re-authenticate CLI |
| MODEL_NOT_SUPPORTED | Model unavailable on plan | Use default model |
| Timeout | Process hung | Auto-killed, use fallback |
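Detection can be as simple as pattern-matching the CLI's stderr against the error types above (a sketch; the patterns here are assumptions, and the real servers may match differently):

```javascript
// Hedged sketch: map raw CLI stderr to the structured error types listed above.
function classifyError(stderr) {
  const s = stderr.toLowerCase();
  if (/quota|rate limit|credit/.test(s)) return 'QUOTA_EXCEEDED';
  if (/auth|token|login|unauthorized/.test(s)) return 'AUTH_EXPIRED';
  if (/unknown model|model not (supported|available)/.test(s)) return 'MODEL_NOT_SUPPORTED';
  return 'UNKNOWN';
}

console.log(classifyError('429: Rate limit exceeded'));    // QUOTA_EXCEEDED
console.log(classifyError('Token expired, please login')); // AUTH_EXPIRED
```

A structured error type, rather than raw text, is what lets the caller decide mechanically whether a fallback provider should be tried.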

Fallback Chain

Primary:   OpenAI ──────────────► Response
           (QUOTA_EXCEEDED?)
                    │
Fallback 1: Qwen ──┴────────────► Response
           (timeout?)
                    │
Fallback 2: DeepSeek ───────────► Response (always available)
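The chain above amounts to a simple loop over providers, retrying only on recoverable errors (a minimal sketch; `consult` and the error strings are illustrative stand-ins for the real MCP calls and structured responses):

```javascript
// Hedged sketch of the fallback chain: try the primary, then each fallback,
// but only when the failure is recoverable (quota/timeout).
const FALLBACKS = { openai: ['qwen', 'deepseek'], gemini: ['qwen', 'deepseek'] };
const RETRYABLE = new Set(['QUOTA_EXCEEDED', 'TIMEOUT']);

async function consultWithFallback(primary, prompt, consult) {
  for (const provider of [primary, ...FALLBACKS[primary]]) {
    const result = await consult(provider, prompt);
    if (result.ok) return { provider, ...result };
    if (!RETRYABLE.has(result.error)) throw new Error(`${provider}: ${result.error}`);
  }
  throw new Error('All providers in the chain failed');
}

// Demo with a stub: openai hits its quota, so qwen answers instead
const stub = async (provider, p) =>
  provider === 'openai'
    ? { ok: false, error: 'QUOTA_EXCEEDED' }
    : { ok: true, text: `${provider}: answer` };
consultWithFallback('openai', 'review this', stub)
  .then((r) => console.log(r.provider)); // qwen
```

Non-recoverable errors (e.g. AUTH_EXPIRED) are surfaced immediately instead, since retrying another provider would hide a problem the user has to fix.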

When to Use Concilium

| Scenario | Recommended Agents |
|----------|--------------------|
| Code review | OpenAI + Gemini (parallel) |
| Architecture decision | OpenAI + Gemini → iterate if disagree |
| Stuck bug (3+ attempts) | All available agents |
| Performance optimization | Gemini (1M context) + OpenAI |
| Security review | OpenAI + Gemini + manual verification |

Customization

See docs/customization.md for:

  • Adding your own LLM provider
  • Modifying the fallback chain
  • MCP server template
  • Custom prompt strategies

License

MIT
