Polybrain MCP Server

An MCP (Model Context Protocol) server for connecting AI agents to multiple LLM models. Supports conversation history, model switching, and seamless Claude Code integration.

Features

  • Multi-model support (OpenAI, OpenRouter, custom endpoints)
  • Conversation history management
  • Switch models mid-conversation
  • Extended thinking/reasoning support (configurable by provider)
  • Pure MCP protocol (silent by default)
  • Automatic server management

Installation

npm install -g polybrain-mcp-server
# or
pnpm add -g polybrain-mcp-server

Quick Setup

1. Configure Models

Option A: YAML (recommended)

Create ~/.polybrain.yaml:

models:
  - id: "gpt-4o"
    modelName: "gpt-4o"
    baseUrl: "https://api.openai.com/v1"
    apiKey: "${OPENAI_API_KEY}"
    provider: "openai"

  - id: "gpt-5.1"
    modelName: "openai/gpt-5.1"
    baseUrl: "https://openrouter.ai/api/v1"
    apiKey: "${OPENROUTER_KEY}"
    provider: "openrouter"

Set env vars:

export OPENAI_API_KEY="sk-..."
export OPENROUTER_KEY="sk-or-..."

Option B: Environment variables

export POLYBRAIN_BASE_URL="https://api.openai.com/v1"
export POLYBRAIN_API_KEY="sk-..."
export POLYBRAIN_MODEL_NAME="gpt-4o"

2. Add to Claude Code

Open Claude Code settings → MCP Servers, add:

{
  "mcpServers": {
    "polybrain": {
      "command": "polybrain"
    }
  }
}

Done! You can now use:

  • chat - Talk to any configured model
  • list_models - See available models
  • conversation_history - Access past conversations
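
For example, a chat tool call from Claude Code names one of the model ids from your config plus the message text. The argument names below (model, message) are an assumption based on the tool descriptions above, not a verified schema:

```json
{
  "name": "chat",
  "arguments": {
    "model": "gpt-4o",
    "message": "Summarize the tradeoffs of optimistic locking."
  }
}
```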

Configuration Reference

Environment Variables

  • POLYBRAIN_BASE_URL - LLM API base URL
  • POLYBRAIN_API_KEY - API key
  • POLYBRAIN_MODEL_NAME - Model name
  • POLYBRAIN_HTTP_PORT - Server port (default: 32701)
  • POLYBRAIN_LOG_LEVEL - Log level (default: info)
  • POLYBRAIN_DEBUG - Enable debug logging to stderr
  • POLYBRAIN_CONFIG_PATH - Custom config file path

YAML Config Fields

httpPort: 32701                     # Optional
truncateLimit: 500                  # Optional
logLevel: info                      # Optional

models:                             # Required
  - id: "model-id"                  # Internal ID
    modelName: "actual-model-name"  # API model name
    baseUrl: "https://api.url/v1"   # API endpoint
    apiKey: "key or ${ENV_VAR}"     # API key
    provider: "openai"              # Optional: provider type for reasoning support

Supported Providers

The provider field enables provider-specific features like extended thinking/reasoning. If not specified, reasoning parameters will not be passed to the API (safe default).

Provider     Reasoning Support   Valid Value
OpenAI       Yes                 "openai"
OpenRouter   Varies by model     "openrouter"

Examples:

  • Use provider: "openai" for OpenAI API models (GPT-4, o-series)
  • Use provider: "openrouter" for OpenRouter proxy service (supports 400+ models)
  • Omit provider field if your endpoint doesn't support reasoning parameters

Example with reasoning:

models:
  - id: "gpt-o1"
    modelName: "o1"
    baseUrl: "https://api.openai.com/v1"
    apiKey: "${OPENAI_API_KEY}"
    provider: "openai"           # Enables reasoning support

  - id: "gpt-5.1"
    modelName: "openai/gpt-5.1"
    baseUrl: "https://openrouter.ai/api/v1"
    apiKey: "${OPENROUTER_KEY}"
    provider: "openrouter"       # Enables reasoning support

To use reasoning, set reasoning: true in the chat tool call. If the model and provider support it, you'll receive both the response and reasoning content.
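
A sketch of such a call, reusing the gpt-o1 model id from the config above (the argument names model and message are assumptions based on the tool list, not a verified schema):

```json
{
  "name": "chat",
  "arguments": {
    "model": "gpt-o1",
    "message": "Prove that the sum of two even numbers is even.",
    "reasoning": true
  }
}
```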

Development

Setup

pnpm install

Build

pnpm build

Lint & Format

pnpm lint
pnpm format

Type Check

pnpm type-check

Development Mode

pnpm dev

Project Structure

src/
├── bin/polybrain.ts    # CLI entry point
├── launcher.ts         # Server launcher & management
├── http-server.ts      # HTTP server
├── index.ts            # Main server logic
├── mcp-tools.ts        # MCP tool definitions
├── conversation-manager.ts
├── openai-client.ts
├── config.ts
├── logger.ts
└── types.ts

How It Works

  1. Launcher checks if HTTP server is running
  2. Starts server in background if needed
  3. Connects to Claude Code via stdio MCP
  4. Routes requests to HTTP backend
  5. Maintains conversation history
  6. Responds with MCP protocol messages
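
The first two steps can be sketched as below. This is a simplified, hypothetical illustration of the launcher's port check, not the actual implementation; it assumes the default port 32701 and a plain TCP probe:

```typescript
// Hypothetical sketch: probe the backend port, as the launcher does before
// deciding whether to start a background HTTP server.
import * as net from "node:net";

// Resolves true if something is listening on the given port, false otherwise.
function isServerRunning(port: number, host = "127.0.0.1"): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ port, host });
    socket.setTimeout(500);
    socket.once("connect", () => {
      socket.destroy();
      resolve(true);
    });
    socket.once("timeout", () => {
      socket.destroy();
      resolve(false);
    });
    socket.once("error", () => resolve(false));
  });
}

async function main() {
  const port = Number(process.env.POLYBRAIN_HTTP_PORT ?? 32701);
  if (await isServerRunning(port)) {
    console.log(`backend already running on port ${port}`);
  } else {
    // The real launcher would spawn the HTTP backend in the background here.
    console.log(`no backend on port ${port}; would start one`);
  }
}

main();
```

If the probe fails, the launcher spawns the backend and then proxies stdio MCP traffic to it.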

Debugging

Enable debug logs to stderr:

{
  "mcpServers": {
    "polybrain": {
      "command": "polybrain",
      "env": {
        "POLYBRAIN_DEBUG": "true"
      }
    }
  }
}

Restart Server

After changing configuration in ~/.polybrain.yaml, restart the HTTP backend server:

polybrain --restart

This kills the background HTTP server. The next time you use polybrain, it will automatically start a fresh server with the updated configuration.

License

MIT
