# MCP Perplexica

MCP server proxy for the Perplexica search API.

This server allows LLMs to perform web searches through Perplexica using the Model Context Protocol (MCP).
## Features

- Web search through Perplexica
- Multiple focus modes (web, academic, YouTube, Reddit, and more)
- Configurable optimization modes (speed, balanced, quality)
- Customizable model configuration
- Source citations in responses
- Multiple transport modes (stdio, SSE, Streamable HTTP)
## Prerequisites

- Python 3.11+
- The `uv` package manager
- A running Perplexica instance
## Installation

1. Clone the repository:

```bash
git clone https://github.com/Kaiohz/mcp-perplexica.git
cd mcp-perplexica
```

2. Install dependencies with uv:

```bash
uv sync
```

3. Create your environment file:

```bash
cp .env.example .env
```

4. Edit `.env` with your configuration:

```bash
# Perplexica API
PERPLEXICA_URL=http://localhost:3000

# Transport: stdio (default), sse, or streamable-http
TRANSPORT=stdio
HOST=127.0.0.1
PORT=8000

# Model configuration
DEFAULT_CHAT_MODEL_PROVIDER_ID=your-provider-id
DEFAULT_CHAT_MODEL_KEY=anthropic/claude-sonnet-4.5
DEFAULT_EMBEDDING_MODEL_PROVIDER_ID=your-provider-id
DEFAULT_EMBEDDING_MODEL_KEY=openai/text-embedding-3-small
```
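These variables are loaded at startup by `config.py` (Pydantic Settings). As a rough, stdlib-only approximation of what that loading does — field names here mirror the `.env` keys above, but the actual class in `config.py` may differ:

```python
# Stdlib-only approximation of the env-var loading that config.py performs
# with Pydantic Settings. Field names mirror the .env keys above; the real
# class in this repo may differ.
import os
from dataclasses import dataclass


@dataclass
class Settings:
    perplexica_url: str = "http://localhost:3000"
    transport: str = "stdio"  # stdio, sse, or streamable-http
    host: str = "127.0.0.1"
    port: int = 8000

    @classmethod
    def from_env(cls) -> "Settings":
        # Each field falls back to its default when the env var is unset.
        return cls(
            perplexica_url=os.getenv("PERPLEXICA_URL", cls.perplexica_url),
            transport=os.getenv("TRANSPORT", cls.transport),
            host=os.getenv("HOST", cls.host),
            port=int(os.getenv("PORT", str(cls.port))),
        )
```

Pydantic Settings adds validation and `.env`-file parsing on top of this, which is why the project uses it instead of raw `os.getenv` calls.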
## Usage

### Transport Modes

The server supports three transport modes:

| Transport | Description | Use Case |
|---|---|---|
| `stdio` | Standard input/output | CLI tools, Claude Desktop |
| `sse` | Server-Sent Events over HTTP | Web clients |
| `streamable-http` | Streamable HTTP (recommended for production) | Production deployments |
### Running with Docker Compose

The easiest way to run both Perplexica and MCP Perplexica together:

```bash
# Copy and configure environment files
cp .env.example .env
cp .env.perplexica.example .env.perplexica

# Edit .env with your MCP Perplexica settings
# Edit .env.perplexica with your Perplexica settings

# Start services
docker compose up -d
```

This starts:

- Perplexica on `http://localhost:3000`
- MCP Perplexica connected to Perplexica
### Running the MCP Server (without Docker)

Stdio mode (default):

```bash
uv run python -m main
```

SSE mode:

```bash
TRANSPORT=sse PORT=8000 uv run python -m main
```

Streamable HTTP mode:

```bash
TRANSPORT=streamable-http PORT=8000 uv run python -m main
```
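Transport selection typically boils down to reading and validating `TRANSPORT` before handing it to the MCP framework's run loop. A minimal sketch of that logic (the function name and error message are illustrative, not taken from `main.py`):

```python
# Illustrative sketch of resolving the transport mode from the environment.
# Function name and error text are hypothetical, not from this repo's source.
import os

VALID_TRANSPORTS = ("stdio", "sse", "streamable-http")


def resolve_transport(env=None) -> str:
    """Return a validated transport name, defaulting to stdio."""
    env = os.environ if env is None else env
    transport = env.get("TRANSPORT", "stdio").lower()
    if transport not in VALID_TRANSPORTS:
        raise ValueError(f"Unsupported transport: {transport!r}")
    return transport
```

With a FastMCP-style server, the resolved value would then be passed to the server's run method to pick between stdio, SSE, and Streamable HTTP.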
### Claude Desktop Configuration

Add to your Claude Desktop configuration (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):

```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/mcp-perplexica", "python", "-m", "main"],
      "env": {
        "PERPLEXICA_URL": "http://localhost:3000",
        "TRANSPORT": "stdio",
        "DEFAULT_CHAT_MODEL_PROVIDER_ID": "your-provider-id",
        "DEFAULT_CHAT_MODEL_KEY": "anthropic/claude-sonnet-4.5",
        "DEFAULT_EMBEDDING_MODEL_PROVIDER_ID": "your-provider-id",
        "DEFAULT_EMBEDDING_MODEL_KEY": "openai/text-embedding-3-small"
      }
    }
  }
}
```
### Claude Code Configuration

For HTTP-based transports, you can add the server to Claude Code:

```bash
# Start the server with streamable-http transport
TRANSPORT=streamable-http PORT=8000 uv run python -m main

# Add to Claude Code
claude mcp add --transport http perplexica http://localhost:8000/mcp
```
## Available Tools

### search

Perform a web search using Perplexica.

Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| `query` | string | Yes | The search query |
| `focus_mode` | string | No | Search focus: `webSearch`, `academicSearch`, `writingAssistant`, `wolframAlphaSearch`, `youtubeSearch`, `redditSearch` |
| `optimization_mode` | string | No | Optimization: `speed`, `balanced`, `quality` |
| `system_instructions` | string | No | Custom instructions for the AI response |
| `chat_model_provider_id` | string | No | Override the default chat model provider |
| `chat_model_key` | string | No | Override the default chat model |
| `embedding_model_provider_id` | string | No | Override the default embedding model provider |
| `embedding_model_key` | string | No | Override the default embedding model |

Example:

> Search for "latest developments in AI" using academic focus
## Development

Install dev dependencies:

```bash
uv sync --dev
```

Run tests:

```bash
uv run pytest
```

Run linter and formatters:

```bash
uv run ruff check .
uv run ruff format .
uv run black src/
```
## Architecture

This project follows hexagonal architecture:

```
src/
├── main.py            # MCP server entry point
├── config.py          # Pydantic Settings
├── dependencies.py    # Dependency injection
├── domain/            # Business core (pure Python)
│   ├── entities.py    # Dataclasses
│   └── ports.py       # ABC interfaces
├── application/       # Use cases
│   ├── requests.py    # Pydantic DTOs
│   └── use_cases.py   # Business logic
└── infrastructure/    # External adapters
    └── perplexica/
        └── adapter.py # HTTP client
```
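In this layout, `domain/ports.py` declares abstract interfaces that `infrastructure/` implements, so the use cases in `application/` never depend on HTTP details. A minimal illustration of the pattern — class and method names below are illustrative, not the actual classes in this repo:

```python
# Illustrative hexagonal-architecture sketch: a domain entity, a domain port
# (ABC), and an adapter implementing it. Names are hypothetical, not copied
# from this repo's source.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class SearchResult:  # domain entity (entities.py would hold dataclasses like this)
    answer: str
    sources: list[str] = field(default_factory=list)


class SearchPort(ABC):  # domain port (ports.py holds ABC interfaces)
    @abstractmethod
    def search(self, query: str) -> SearchResult: ...


class FakePerplexicaAdapter(SearchPort):
    """Stand-in for the real HTTP adapter in infrastructure/perplexica/."""

    def search(self, query: str) -> SearchResult:
        return SearchResult(answer=f"stub answer for {query!r}")


def run_search(port: SearchPort, query: str) -> SearchResult:
    """Use case: depends only on the port, never on a concrete adapter."""
    return port.search(query)
```

Because the use case only sees `SearchPort`, the Perplexica HTTP adapter can be swapped for a fake in tests without touching the business logic.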
## License

MIT