
# Elrond MCP - Thinking Augmentation Server

A Model Context Protocol (MCP) server that provides hierarchical LLM critique and synthesis for enhanced decision-making and idea evaluation.

> [!WARNING]
> **Preview Software**: This is experimental software in active development and is not intended for production use. Features may change, break, or be removed without notice. Use at your own risk.

## Overview

Elrond MCP implements a multi-agent thinking augmentation system that analyzes proposals through three specialized critique perspectives (positive, neutral, negative) and synthesizes them into comprehensive, actionable insights. This approach helps overcome single-model biases and provides more thorough analysis of complex ideas.
## Features

- **Parallel Critique Analysis**: Three specialized agents analyze proposals simultaneously from different perspectives
- **Structured Responses**: Uses Pydantic models and the `instructor` library for reliable, structured outputs
- **Google AI Integration**: Leverages Gemini 2.5 Flash for critiques and Gemini 2.5 Pro for synthesis
- **MCP Compliance**: Full Model Context Protocol support for seamless integration with AI assistants
- **Comprehensive Analysis**: Covers feasibility, risks, benefits, implementation, stakeholder impact, and resource requirements
- **Consensus Building**: Identifies areas of agreement and disagreement across perspectives
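The structured-responses guarantee can be illustrated with a small sketch. The `Critique` model and its field names below are illustrative only, not the server's actual `models.py` definitions:

```python
from pydantic import BaseModel, Field

# Hypothetical shape of a structured critique -- field names are
# illustrative, not the server's actual models (see models.py).
class Critique(BaseModel):
    perspective: str  # e.g. "positive" | "neutral" | "negative"
    executive_summary: str
    key_insights: list[str] = Field(min_length=1)
    confidence: float = Field(ge=0.0, le=1.0)

# With instructor, a patched client can return this model directly,
# e.g. via a response_model=Critique argument on the completion call.
# Here we just validate a sample payload to show the guarantee:
raw = {
    "perspective": "neutral",
    "executive_summary": "Feasible but cost estimates look optimistic.",
    "key_insights": ["CRM integration is the main risk"],
    "confidence": 0.7,
}
critique = Critique.model_validate(raw)
print(critique.confidence)  # 0.7
```

A payload that violates the constraints (for example, `confidence: 1.5`) raises a `ValidationError` instead of silently passing malformed output downstream.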
## Architecture

```
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│    Positive     │   │     Neutral     │   │    Negative     │
│    Critique     │   │    Critique     │   │    Critique     │
│      Agent      │   │      Agent      │   │      Agent      │
│                 │   │                 │   │                 │
│   Gemini 2.5    │   │   Gemini 2.5    │   │   Gemini 2.5    │
│      Flash      │   │      Flash      │   │      Flash      │
└────────┬────────┘   └────────┬────────┘   └────────┬────────┘
         │                     │                     │
         └─────────────────────┼─────────────────────┘
                               │
                               ▼
                  ┌─────────────────────────┐
                  │     Synthesis Agent     │
                  │                         │
                  │     Gemini 2.5 Pro      │
                  │                         │
                  │   Consensus + Summary   │
                  └─────────────────────────┘
```
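The fan-out/fan-in flow in the diagram can be sketched with `asyncio.gather`. The agent functions below are stand-ins: the real server calls Gemini 2.5 Flash for each critique and Gemini 2.5 Pro for the synthesis.

```python
import asyncio

async def critique(perspective: str, proposal: str) -> str:
    await asyncio.sleep(0)  # placeholder for the model call
    return f"{perspective} view of: {proposal}"

async def synthesize(critiques: list[str]) -> str:
    await asyncio.sleep(0)  # placeholder for the model call
    return f"consensus from {len(critiques)} critiques"

async def consult_the_council(proposal: str) -> str:
    # All three critiques run concurrently, then feed the synthesizer.
    critiques = await asyncio.gather(
        critique("positive", proposal),
        critique("neutral", proposal),
        critique("negative", proposal),
    )
    return await synthesize(list(critiques))

print(asyncio.run(consult_the_council("Project Alpha")))
# -> consensus from 3 critiques
```

Running the three critiques concurrently means total latency is bounded by the slowest critique plus the synthesis call, rather than the sum of all four.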
## Installation

### Prerequisites

- Python 3.13 or higher
- Google AI API key (get one at Google AI Studio)

### Setup

1. **Clone the repository**:

   ```bash
   git clone <repository-url>
   cd elrond-mcp
   ```

2. **Install dependencies**:

   ```bash
   # Using uv (recommended)
   uv sync --dev --all-extras

   # Or using pip
   pip install -e .[dev]
   ```

3. **Configure API key**:

   ```bash
   export GEMINI_API_KEY="your-gemini-api-key-here"

   # Or create a .env file
   echo "GEMINI_API_KEY=your-gemini-api-key-here" > .env
   ```
## Usage

### Running the Server

#### Development Mode

```bash
# Using uv
uv run python main.py

# Using MCP CLI (if installed)
mcp dev elrond_mcp/server.py
```

#### Production Mode

```bash
# Direct execution
python main.py

# Or via package entry point
elrond-mcp
```
### Integration with Claude Desktop

1. **Install for Claude Desktop**:

   ```bash
   mcp install elrond_mcp/server.py --name "Elrond Thinking Augmentation"
   ```

2. **Manual Configuration**: Add to your Claude Desktop MCP settings:

   ```json
   {
     "elrond-mcp": {
       "command": "python",
       "args": ["/path/to/elrond-mcp/main.py"],
       "env": {
         "GEMINI_API_KEY": "your-api-key-here"
       }
     }
   }
   ```
### Using the Tools

#### Augment Thinking Tool

Analyze any proposal through multi-perspective critique. Use the "consult_the_council" tool with this proposal:

```markdown
# Project Alpha: AI-Powered Customer Service

## Overview
Implement an AI chatbot to handle 80% of customer service inquiries, reducing response time from 2 hours to 30 seconds.

## Goals
- Reduce operational costs by 40%
- Improve customer satisfaction scores
- Free up human agents for complex issues

## Implementation
- Deploy GPT-4 based chatbot
- Integrate with existing CRM
- 3-month rollout plan
- $200K initial investment
```

#### Check System Status Tool

Monitor the health and configuration of the thinking augmentation system. Use the `check_system_status` tool to verify:

- API key configuration
- Model availability
- System health
## Response Structure

### Critique Response

Each critique agent provides:

- **Executive Summary**: Brief overview of the perspective
- **Structured Analysis**:
  - Feasibility assessment
  - Risk identification
  - Benefit analysis
  - Implementation considerations
  - Stakeholder impact
  - Resource requirements
- **Key Insights**: 3-5 critical observations
- **Confidence Level**: Numerical confidence (0.0-1.0)
### Synthesis Response

The synthesis agent provides:

- **Executive Summary**: High-level recommendation
- **Consensus View**:
  - Areas of agreement
  - Areas of disagreement
  - Balanced assessment
  - Critical considerations
- **Recommendation**: Overall guidance
- **Next Steps**: Concrete action items
- **Uncertainty Flags**: Areas needing more information
- **Overall Confidence**: Synthesis confidence level
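As a rough illustration, the synthesis fields above map naturally onto a Pydantic model. The class and field names below are hypothetical, not the definitions in `models.py`:

```python
from pydantic import BaseModel, Field

# Illustrative mirror of the synthesis fields listed above -- the actual
# classes live in models.py and may be named and shaped differently.
class ConsensusView(BaseModel):
    areas_of_agreement: list[str]
    areas_of_disagreement: list[str]
    balanced_assessment: str
    critical_considerations: list[str]

class SynthesisResponse(BaseModel):
    executive_summary: str
    consensus_view: ConsensusView
    recommendation: str
    next_steps: list[str]
    uncertainty_flags: list[str]
    overall_confidence: float = Field(ge=0.0, le=1.0)

sample = SynthesisResponse(
    executive_summary="Proceed with a scoped pilot.",
    consensus_view=ConsensusView(
        areas_of_agreement=["Cost savings are plausible"],
        areas_of_disagreement=["Timeline realism"],
        balanced_assessment="Promising but under-scoped.",
        critical_considerations=["CRM integration effort"],
    ),
    recommendation="Pilot with one support queue first.",
    next_steps=["Validate the $200K estimate"],
    uncertainty_flags=["Current ticket volume unknown"],
    overall_confidence=0.65,
)
print(sample.overall_confidence)  # 0.65
```

Nesting the consensus fields in their own model keeps the agreement/disagreement structure validated as a unit rather than as loose top-level strings.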
## Development

### Project Structure

```
elrond-mcp/
├── elrond_mcp/
│   ├── __init__.py
│   ├── server.py        # MCP server implementation
│   ├── agents.py        # Critique and synthesis agents
│   ├── client.py        # Centralized Google AI client management
│   └── models.py        # Pydantic data models
├── scripts/             # Development scripts
│   └── check.sh         # Quality check script
├── tests/               # Test suite
├── main.py              # Entry point
├── pyproject.toml       # Project configuration
└── README.md
```
### Running Tests

```bash
# Using uv
uv run pytest

# Using pip
pytest
```

### Code Formatting

```bash
# Format and lint code
uv run ruff format .
uv run ruff check --fix .

# Type checking
uv run mypy elrond_mcp/
```
### Development Script

For convenience, use the provided script to run all quality checks:

```bash
# Run all quality checks (lint, format, test)
./scripts/check.sh
```

This script will:

- Sync dependencies
- Run the Ruff linter with auto-fix
- Format code with Ruff
- Execute the full test suite
- Perform a final lint check
- Provide a pre-commit checklist
## Configuration

### Environment Variables

- `GEMINI_API_KEY`: Required. Google AI API key
- `LOG_LEVEL`: Logging level (default: `INFO`)

### Model Configuration

- **Critique Agents**: `gemini-2.5-flash`
- **Synthesis Agent**: `gemini-2.5-pro`

Models can be customized by modifying the agent initialization in `agents.py`.
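As a hypothetical sketch of what such a customization point might look like (the names below are illustrative, not the actual code in `agents.py` -- check that file before editing):

```python
from dataclasses import dataclass

# Hypothetical configuration object for the agent model names; the real
# initialization in agents.py may hard-code these strings instead.
@dataclass
class AgentConfig:
    critique_model: str = "gemini-2.5-flash"
    synthesis_model: str = "gemini-2.5-pro"

# e.g. trade cost for quality by upgrading the critique model:
config = AgentConfig(critique_model="gemini-2.5-pro")
print(config.critique_model, config.synthesis_model)
# -> gemini-2.5-pro gemini-2.5-pro
```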
## Troubleshooting

### Common Issues

1. **API Key Not Found**

   ```
   Error: Google AI API key is required
   ```

   Solution: Set the `GEMINI_API_KEY` environment variable.

2. **Empty Proposal Error**

   ```
   Error: Proposal cannot be empty
   ```

   Solution: Ensure your proposal is at least 10 characters long.

3. **Model Rate Limits**

   ```
   Error: Rate limit exceeded
   ```

   Solution: Wait a moment and retry, or check your Google AI quota.

4. **Validation Errors**

   ```
   ValidationError: ...
   ```

   Solution: The LLM response didn't match the expected structure. This is usually temporary; retry the request.

### Debugging

Enable debug logging:

```bash
export LOG_LEVEL=DEBUG
export GEMINI_API_KEY=your-api-key-here
python main.py
```

Check system status: use the `check_system_status` tool to verify configuration.
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Run the test suite
6. Submit a pull request
## License

See `LICENSE`.
## Support

For issues and questions:

- Check the troubleshooting section above
- Review the logs for detailed error information
- Open an issue on the repository
## Roadmap

- [ ] Support for additional LLM providers (OpenAI, Anthropic)
- [ ] Custom critique perspectives and personas
- [ ] Performance optimization and caching
- [ ] Advanced synthesis algorithms