EX MCP Server

EX MCP Server is a Model Context Protocol (MCP) server that connects modern LLM providers and tools to MCP‑compatible clients (e.g., Claude Desktop/CLI). It provides a unified set of analysis, debugging, refactoring, documentation, testing, and project automation tools accessible over the MCP stdio protocol.

Key Capabilities

  • Unified MCP server exposing rich development tools:
    • analyze, codereview, debug, refactor, tracer, testgen, precommit, listmodels, version
  • Provider integrations:
    • KIMI (Moonshot), GLM (Zhipu), OpenRouter, and custom OpenAI‑compatible endpoints
  • MCP‑first architecture:
    • Subprocess stdio transport with direct config examples for Claude Desktop/CLI
  • Docker and local dev support:
    • Docker image build/publish, local virtualenv (.venv), and cross‑platform scripts

Installation

Prerequisites

  • Python 3.9+
  • Git
  • For local dev: virtualenv support
  • Optional: Docker and Docker Compose

Clone

git clone https://github.com/BeehiveInnovations/ex-mcp-server.git
cd ex-mcp-server

Setup (local)

python -m venv .venv
# Windows
.venv\Scripts\activate
# macOS/Linux
source .venv/bin/activate

pip install -r requirements.txt
pip install -r requirements-dev.txt

cp .env.example .env
# Set at least one provider key (KIMI_API_KEY, GLM_API_KEY, OPENROUTER_API_KEY, or CUSTOM_API_URL/CUSTOM_API_KEY)
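
For reference, a minimal .env might look like the sketch below; the values are placeholders and only one provider needs to be configured:

# Illustrative .env sketch (placeholder values; see .env.example for the full list)
KIMI_API_KEY=your-kimi-key
GLM_API_KEY=your-glm-key
OPENROUTER_API_KEY=your-openrouter-key
# Or point at a custom OpenAI-compatible endpoint instead:
CUSTOM_API_URL=https://your-endpoint.example.com/v1
CUSTOM_API_KEY=your-custom-key
DEFAULT_MODEL=glm-4.5-flash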

Run (local)

python -m server   # or: python server.py

Configure a client (Claude Desktop/CLI)

Minimal example (stdio):

{
  "mcpServers": {
    "ex": {
      "type": "stdio",
      "trust": true,
      "command": "python",
      "args": ["-u", "scripts/mcp_server_wrapper.py"],
      "cwd": "/absolute/path/to/ex-mcp-server",
      "env": {
        "MCP_SERVER_NAME": "ex",
        "MCP_SERVER_ID": "ex-server",
        "PYTHONPATH": "/absolute/path/to/ex-mcp-server",
        "ENV_FILE": "/absolute/path/to/ex-mcp-server/.env"
      }
    }
  }
}

See the examples/ directory for more configs (macOS, WSL, desktop CLI variants).

Docker

Build and run locally:

docker build -t ex-mcp-server:latest .
docker run --rm -it ex-mcp-server:latest

A reverse proxy example (nginx.conf) and a remote compose file (docker-compose.remote.yml) that exposes the server as ex-mcp are also provided; a sketch of such a compose service follows.
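
For orientation, a compose service along these lines would expose the image under the ex-mcp name (an illustrative sketch, not the shipped docker-compose.remote.yml; the log mount is an assumption):

services:
  ex-mcp:
    build: .
    image: ex-mcp-server:latest
    env_file: .env
    restart: unless-stopped
    volumes:
      - ./logs:/app/logs   # assumed log path; adjust to your setup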

Usage Overview

  • Use the version tool to verify the install (in Claude Desktop/CLI, invoke the ex server's version tool; a standalone client sketch follows this list).
  • Common tools:
    • analyze: smart file analysis
    • codereview: professional code review
    • debug: debugging assistant
    • refactor: code refactoring
    • tracer: static analysis / call chain aid
    • testgen: test generation
    • precommit: quick pre-commit validation
    • listmodels: show available models/providers
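
Outside of Claude Desktop/CLI, the same tools can be exercised with a small stdio client. Below is a minimal sketch, assuming the official MCP Python SDK (the mcp package) is installed and the script is run from the repository root:

# Standalone MCP stdio client sketch (assumes the `mcp` Python SDK is installed
# and the command is run from the ex-mcp-server checkout).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="python",
    args=["-u", "scripts/mcp_server_wrapper.py"],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools:", [t.name for t in tools.tools])
            result = await session.call_tool("version", arguments={})
            print(result)

asyncio.run(main())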

Provider‑native Web Browsing Schemas

  • Kimi (Moonshot): inject an OpenAI function tool named "web_search" with a single string parameter "query".
  • GLM (Zhipu): enable tools = [{"type":"web_search","web_search":{}}] only when allowed by env (both payload shapes are sketched after this list).
  • Set these via env for production readiness:
    • KIMI_ENABLE_INTERNET_TOOL=true and KIMI_INTERNET_TOOL_SPEC to a valid JSON tool schema
    • GLM_ENABLE_WEB_BROWSING=true when appropriate (and other GLM browsing flags as documented)
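
Sketched as Python literals, the two payload shapes look roughly like this (the description string is illustrative; the authoritative Kimi spec comes from KIMI_INTERNET_TOOL_SPEC):

# Kimi (Moonshot): OpenAI-style function tool named "web_search" with a
# single string parameter "query".
kimi_web_search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for up-to-date information.",  # illustrative text
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

# GLM (Zhipu): provider-native web_search entry, sent only when
# GLM_ENABLE_WEB_BROWSING=true permits it.
glm_tools = [{"type": "web_search", "web_search": {}}]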

Hidden Model Router (Auto Model Selection)

The server can auto-select a concrete model at the MCP boundary so users don’t need to specify one.

  • Enable: HIDDEN_MODEL_ROUTER_ENABLED=true
  • Sentinels: ROUTER_SENTINEL_MODELS=glm-4.5-flash,auto
  • Default: DEFAULT_MODEL=glm-4.5-flash (a sentinel)

Behavior:

  • If a tool requires a model and the incoming model is a sentinel (or "auto"), the server resolves a concrete model (a rough sketch of this behavior follows this list).
  • Structured logs emitted by the server (logger name: "server"):
    • EVENT boundary_model_resolution_attempt input_model=... tool=... sentinel_match=... hidden_router=...
    • EVENT boundary_model_resolved input_model=... resolved_model=... tool=...
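
Conceptually, the boundary step behaves roughly like the sketch below. This is an illustration of the documented behavior, not the server's actual code, and pick_concrete_model is a hypothetical stand-in:

import logging
import os

logger = logging.getLogger("server")

def pick_concrete_model(tool_name: str) -> str:
    # Stand-in for the server's provider-aware selection; the returned
    # model name is purely illustrative.
    return "glm-4.5-air"

def resolve_model_at_boundary(tool_name: str, input_model: str) -> str:
    hidden_router = os.getenv("HIDDEN_MODEL_ROUTER_ENABLED", "false").lower() == "true"
    sentinels = {m.strip() for m in os.getenv("ROUTER_SENTINEL_MODELS", "glm-4.5-flash,auto").split(",")}
    sentinel_match = input_model in sentinels or input_model == "auto"

    logger.info(
        "EVENT boundary_model_resolution_attempt input_model=%s tool=%s sentinel_match=%s hidden_router=%s",
        input_model, tool_name, sentinel_match, hidden_router,
    )
    if hidden_router and sentinel_match:
        resolved = pick_concrete_model(tool_name)
        logger.info(
            "EVENT boundary_model_resolved input_model=%s resolved_model=%s tool=%s",
            input_model, resolved, tool_name,
        )
        return resolved
    return input_model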

Notes:

  • The Consensus tool intentionally does not resolve models at the MCP boundary (requires_model = False). You will see the "attempt" log at the boundary, and per-step model selection happens inside the tool.

Tip: Use listmodels to see configured providers/models.

Agentic Audit with Real Models (EX‑AI)

Use a consensus-based, multi-model audit to find issues and get direct fixes.

  1. Set provider keys in .env:
  • KIMI_API_KEY=...
  • GLM_API_KEY=...
  2. Run the audit script:
python scripts/exai_agentic_audit.py --models glm-4.5-air kimi-k2-0905-preview

Or rely on the env defaults (GLM_AUDIT_MODEL, KIMI_AUDIT_MODEL) and simply run:

python scripts/exai_agentic_audit.py

The script returns JSON of the form:

{
  "issues": [ { "title": str, "evidence": str, "direct_fix": str }... ],
  "summary": str
}
  3. Interpreting results:
  • Each issue has “direct_fix” with exactly what to change and where.
  • Re-run after fixes to validate improvements.
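
One way to consume that report, sketched under the assumption the script's JSON has been saved to a file (the audit.json name is illustrative):

import json

with open("audit.json", encoding="utf-8") as fh:
    report = json.load(fh)

print(report["summary"])
for issue in report["issues"]:
    print(f"- {issue['title']}")
    print(f"  evidence:   {issue['evidence']}")
    print(f"  direct_fix: {issue['direct_fix']}")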

Tests: End-to-end (no real keys required)

We include an “ultimate” test file designed for EX‑AI‑style validation:

  • tests/test_e2e_exai_ultimate.py
  • Each assert prints a Direct Fix if it fails.
  • Run: python -m pytest -q tests/test_e2e_exai_ultimate.py
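
The assert style is roughly the following (a sketch of the pattern, not an excerpt from the actual test file):

import os

def test_env_example_present():
    assert os.path.exists(".env.example"), (
        "Direct Fix: add a .env.example at the repository root listing the "
        "supported provider variables (KIMI_API_KEY, GLM_API_KEY, ...)"
    )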

CI/Test Hygiene (EX fork)

This fork disables some upstream providers by design. If you run the full test suite, import errors may occur for those optional providers. See docs/ci-test-notes.md for ways to skip/guard those tests in CI.

Configuration

  • Environment file: .env (see .env.example for available variables)
  • Key variables:
    • DEFAULT_MODEL, LOCALE, MAX_MCP_OUTPUT_TOKENS
    • Provider keys: KIMI_API_KEY, GLM_API_KEY, OPENROUTER_API_KEY
    • Custom API: CUSTOM_API_URL, CUSTOM_API_KEY
  • Logging: logs/ directory (Docker and local scripts manage ownership/paths)
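
If a helper script needs the same settings, a minimal sketch of loading them (assuming python-dotenv is installed, which may not be a declared dependency; the fallback values are illustrative):

import os

from dotenv import load_dotenv

load_dotenv(os.getenv("ENV_FILE", ".env"))

default_model = os.getenv("DEFAULT_MODEL", "glm-4.5-flash")
locale = os.getenv("LOCALE", "en-US")                    # fallback is illustrative
max_output_tokens = os.getenv("MAX_MCP_OUTPUT_TOKENS")   # None -> server default
kimi_key = os.getenv("KIMI_API_KEY")
custom_api_url = os.getenv("CUSTOM_API_URL")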

Attribution

This project is based on the original work at:

  • https://github.com/BeehiveInnovations/zen-mcp-server

We have forked/copied and adapted it to create EX MCP Server. Attribution to the original authors is preserved.

Our EX‑specific Changes (Zen → EX)

  • Rebranding:
    • Service name: zen-mcp → ex-mcp
    • Non-root user: zenuser → exuser (Dockerfile, file ownership)
    • Virtual environment: .zen_venv → .venv
    • Branding strings: “Zen MCP Server” → “EX MCP Server”
  • Examples/configs:
    • Server IDs: "zen" → "ex"
    • Commands: zen-mcp-server → ex-mcp-server
    • Paths updated to ex-mcp-server
  • CI/workflows & templates:
    • GitHub discussions/links point to ex-mcp-server
    • GHCR image names: ghcr.io/<org>/ex-mcp-server:...
  • Architecture intent:
    • MCP-first stdio transport, reverse proxy alignment, and consistent service naming

Contributing

Please see CONTRIBUTING.md for development workflow, coding standards, and testing.

License

See the LICENSE file in this repository.

Additional Resources

  • MCP Spec: https://modelcontextprotocol.io/
  • Claude Desktop docs for MCP: https://docs.anthropic.com/claude/docs/model-context-protocol
  • Original source (upstream): https://github.com/BeehiveInnovations/zen-mcp-server
  • Current project: https://github.com/BeehiveInnovations/ex-mcp-server
