DeepSeek MCP Server
<a href="https://glama.ai/mcp/servers/arikusi/deepseek-mcp-server"> <img width="380" height="200" src="https://glama.ai/mcp/servers/arikusi/deepseek-mcp-server/badge" /> </a>
MCP server for DeepSeek AI models (Chat + Reasoner). Supports stdio and HTTP transports, Docker deployment, multi-turn sessions, model fallback with circuit breaker, function calling, thinking mode, JSON output, multimodal input, and cost tracking.
Compatible with:
- Claude Code CLI
- Gemini CLI
- Any MCP-compatible client (Cursor, Windsurf, etc.)
Note: This is an unofficial community project and is not affiliated with DeepSeek.
Quick Start
Remote (No Install)
Use the hosted endpoint directly — no npm install, no Node.js required. Bring your own DeepSeek API key:
Claude Code:
```bash
claude mcp add --transport http deepseek \
  https://deepseek-mcp.tahirl.com/mcp \
  --header "Authorization: Bearer YOUR_DEEPSEEK_API_KEY"
```
Cursor / Windsurf / VS Code:
```json
{
  "mcpServers": {
    "deepseek": {
      "url": "https://deepseek-mcp.tahirl.com/mcp",
      "headers": {
        "Authorization": "Bearer ${DEEPSEEK_API_KEY}"
      }
    }
  }
}
```
Local (stdio)
Claude Code:
```bash
claude mcp add -s user deepseek npx @arikusi/deepseek-mcp-server -e DEEPSEEK_API_KEY=your-key-here
```
Gemini CLI:
```bash
gemini mcp add deepseek npx @arikusi/deepseek-mcp-server -e DEEPSEEK_API_KEY=your-key-here
```
Scope options (Claude Code):
- `-s user`: Available in all your projects (recommended)
- `-s local`: Only in the current project (default)
- `-s project`: Project-specific `.mcp.json` file
Get your API key: https://platform.deepseek.com
Features
- DeepSeek V3.2: Both models now run DeepSeek-V3.2 (since Sept 2025)
- Multi-Turn Sessions: Conversation context preserved across requests via the `session_id` parameter
- Model Fallback & Circuit Breaker: Automatic fallback between models with circuit breaker protection against cascading failures
- MCP Resources: `deepseek://models`, `deepseek://config`, `deepseek://usage` — query model info, config, and usage stats
- Thinking Mode: Enable thinking on deepseek-chat with `thinking: {type: "enabled"}`
- JSON Output Mode: Structured JSON responses with `json_mode: true`
- Function Calling: OpenAI-compatible tool use with up to 128 tool definitions
- Cache-Aware Cost Tracking: Automatic cost calculation with cache hit/miss breakdown
- Session Management Tool: List, delete, and clear sessions via the `deepseek_sessions` tool
- Configurable: Environment-based configuration with validation
- 12 Prompt Templates: Templates for debugging, code review, function calling, and more
- Streaming Support: Real-time response generation
- Multimodal Ready: Content part types for text + image input (enable with `ENABLE_MULTIMODAL=true`)
- Remote Endpoint: Hosted at `deepseek-mcp.tahirl.com/mcp` — BYOK (Bring Your Own Key), no install needed
- HTTP Transport: Self-hosted remote access via Streamable HTTP with `TRANSPORT=http`
- Docker Ready: Multi-stage Dockerfile with health checks for containerized deployment
- Tested: 253 tests with 90%+ code coverage
- Type-Safe: Full TypeScript implementation
- MCP Compatible: Works with any MCP-compatible client (Claude Code, Gemini CLI, etc.)
Installation
Prerequisites
- Node.js 18+
- A DeepSeek API key (get one at https://platform.deepseek.com)
Manual Installation
If you prefer to install manually:
```bash
npm install -g @arikusi/deepseek-mcp-server
```
From Source
- Clone the repository

  ```bash
  git clone https://github.com/arikusi/deepseek-mcp-server.git
  cd deepseek-mcp-server
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Build the project

  ```bash
  npm run build
  ```
Usage
Once configured, your MCP client will have access to the `deepseek_chat` and `deepseek_sessions` tools, plus 3 MCP resources.
Example prompts:
"Use DeepSeek to explain quantum computing"
"Ask DeepSeek Reasoner to solve: If I have 10 apples and buy 5 more..."
Your MCP client will automatically call the deepseek_chat tool.
Manual Configuration (Advanced)
If your MCP client doesn't support the add command, manually add to your config file:
```json
{
  "mcpServers": {
    "deepseek": {
      "command": "npx",
      "args": ["@arikusi/deepseek-mcp-server"],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
Config file locations:
- Claude Code: `~/.claude.json` (add to the `projects["your-project-path"].mcpServers` section)
- Other MCP clients: Check your client's documentation for the config file location
Available Tools
deepseek_chat
Chat with DeepSeek AI models with automatic cost tracking and function calling support.
Parameters:
- `messages` (required): Array of conversation messages
  - `role`: "system" | "user" | "assistant" | "tool"
  - `content`: Message text
  - `tool_call_id` (optional): Required for tool-role messages
- `model` (optional): "deepseek-chat" (default) or "deepseek-reasoner"
- `temperature` (optional): 0-2, controls randomness (default: 1.0). Ignored when thinking mode is enabled.
- `max_tokens` (optional): Maximum tokens to generate (deepseek-chat: max 8192, deepseek-reasoner: max 65536)
- `stream` (optional): Enable streaming mode (default: false)
- `tools` (optional): Array of tool definitions for function calling (max 128)
- `tool_choice` (optional): "auto" | "none" | "required" | `{type: "function", function: {name: "..."}}`
- `thinking` (optional): Enable thinking mode with `{type: "enabled"}`
- `json_mode` (optional): Enable JSON output mode (supported by both models)
- `session_id` (optional): Session ID for multi-turn conversations. Previous context is automatically prepended.
Response includes:
- Content with formatting
- Function call results (if tools were used)
- Request information (tokens, model, cost in USD)
- Structured data with `cost_usd` and `tool_calls` fields
Example:
```json
{
  "messages": [
    {
      "role": "user",
      "content": "Explain the theory of relativity in simple terms"
    }
  ],
  "model": "deepseek-chat",
  "temperature": 0.7,
  "max_tokens": 1000
}
```
DeepSeek Reasoner Example:
```json
{
  "messages": [
    {
      "role": "user",
      "content": "If I have 10 apples and eat 3, then buy 5 more, how many do I have?"
    }
  ],
  "model": "deepseek-reasoner"
}
```
The reasoner model will show its thinking process in `<thinking>` tags, followed by the final answer.
Function Calling Example:
```json
{
  "messages": [
    {
      "role": "user",
      "content": "What's the weather in Istanbul?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City name"
            }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
```
When the model decides to call a function, the response includes `tool_calls` with the function name and arguments. You can then send the result back using a tool-role message with the matching `tool_call_id`.
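As an illustration of that round trip (the call id, stubbed result, and `get_weather` handler below are hypothetical; only the `role: "tool"` / `tool_call_id` message format follows the OpenAI-compatible convention the server uses):

```typescript
// A hypothetical tool call as it might appear in a response's tool_calls array.
const toolCall = {
  id: "call_abc123", // hypothetical id returned by the model
  type: "function",
  function: { name: "get_weather", arguments: '{"location":"Istanbul"}' },
};

// Execute the function yourself with the model-provided arguments...
const args = JSON.parse(toolCall.function.arguments) as { location: string };
const result = { location: args.location, tempC: 21, condition: "sunny" }; // stubbed

// ...then append a tool-role message with the matching id to the next request.
const toolMessage = {
  role: "tool",
  tool_call_id: toolCall.id,
  content: JSON.stringify(result),
};
```

The model then reads the tool result and produces its final answer on the next turn.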
Thinking Mode Example:
```json
{
  "messages": [
    {
      "role": "user",
      "content": "Analyze the time complexity of quicksort"
    }
  ],
  "model": "deepseek-chat",
  "thinking": { "type": "enabled" }
}
```
When thinking mode is enabled, `temperature`, `top_p`, `frequency_penalty`, and `presence_penalty` are automatically ignored.
JSON Output Mode Example:
```json
{
  "messages": [
    {
      "role": "user",
      "content": "Return a json object with name, age, and city fields for a sample user"
    }
  ],
  "model": "deepseek-chat",
  "json_mode": true
}
```
JSON mode ensures the model outputs valid JSON. Include the word "json" in your prompt for best results. Supported by both deepseek-chat and deepseek-reasoner.
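On the client side it can still be worth validating the shape of the parsed output. A minimal sketch, assuming the sample-user prompt above (the `SampleUser` interface is illustrative, not part of the server's API):

```typescript
// Expected shape for the sample-user prompt above (hypothetical).
interface SampleUser {
  name: string;
  age: number;
  city: string;
}

// Parse a JSON-mode response and verify it matches the expected shape.
function parseSampleUser(text: string): SampleUser {
  const data = JSON.parse(text);
  if (
    typeof data?.name !== "string" ||
    typeof data?.age !== "number" ||
    typeof data?.city !== "string"
  ) {
    throw new Error("JSON response did not match the expected shape");
  }
  return data as SampleUser;
}

const user = parseSampleUser('{"name":"Ada","age":36,"city":"London"}');
```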
Multi-Turn Session Example:
```json
{
  "messages": [
    {
      "role": "user",
      "content": "What is the capital of France?"
    }
  ],
  "session_id": "my-session-1"
}
```
Use the same `session_id` across requests to maintain conversation context. The server stores messages in memory and automatically prepends history to each request.
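Conceptually, the session handling can be sketched like this (illustrative only; the real store in `src/session.ts` also enforces `SESSION_TTL_MINUTES` and `MAX_SESSIONS` limits):

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
}

// In-memory store keeping at most `maxMessages` per session —
// a sliding window, mirroring MAX_SESSION_MESSAGES.
class SessionStore {
  private sessions = new Map<string, ChatMessage[]>();

  constructor(private maxMessages = 200) {}

  // History is prepended to each new request's messages.
  buildRequestMessages(sessionId: string, incoming: ChatMessage[]): ChatMessage[] {
    const history = this.sessions.get(sessionId) ?? [];
    return [...history, ...incoming];
  }

  append(sessionId: string, messages: ChatMessage[]): void {
    const merged = [...(this.sessions.get(sessionId) ?? []), ...messages];
    // Drop the oldest messages once the window overflows.
    this.sessions.set(sessionId, merged.slice(-this.maxMessages));
  }

  delete(sessionId: string): boolean {
    return this.sessions.delete(sessionId);
  }
}

// Demo with a tiny 3-message window.
const store = new SessionStore(3);
store.append("demo", [{ role: "user", content: "m1" }]);
store.append("demo", [
  { role: "user", content: "m2" },
  { role: "assistant", content: "m3" },
  { role: "user", content: "m4" },
]);
// Window now holds only m2, m3, m4; a new request sees them prepended.
const next = store.buildRequestMessages("demo", [{ role: "user", content: "m5" }]);
```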
deepseek_sessions
Manage conversation sessions.
Parameters:
- `action` (required): "list" | "clear" | "delete"
- `session_id` (optional): Required when action is "delete"
Examples:
```json
{"action": "list"}
{"action": "delete", "session_id": "my-session-1"}
{"action": "clear"}
```
Available Resources
MCP Resources provide read-only data about the server:
| Resource URI | Description |
|---|---|
| `deepseek://models` | Available models with capabilities, context limits, and pricing |
| `deepseek://config` | Current server configuration (API key masked) |
| `deepseek://usage` | Real-time usage statistics (requests, tokens, costs, sessions) |
Model Fallback & Circuit Breaker
When a model fails with a retryable error (429, 503, timeout), the server automatically falls back to the other model:
- `deepseek-chat` fails → tries `deepseek-reasoner`
- `deepseek-reasoner` fails → tries `deepseek-chat`
The circuit breaker protects against cascading failures:
- After `CIRCUIT_BREAKER_THRESHOLD` consecutive failures (default: 5), the circuit opens (fast-fail mode)
- After `CIRCUIT_BREAKER_RESET_TIMEOUT` ms (default: 30000), it enters a half-open state and sends a probe request
- If the probe succeeds, the circuit closes and normal operation resumes

Fallback can be disabled with `FALLBACK_ENABLED=false`.
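A minimal sketch of that closed → open → half-open cycle (illustrative only; the actual implementation lives in `src/circuit-breaker.ts` and may differ in detail):

```typescript
type CircuitState = "closed" | "open" | "half-open";

class CircuitBreaker {
  private state: CircuitState = "closed";
  private failures = 0;
  private openedAt = 0;

  constructor(
    private threshold = 5,            // mirrors CIRCUIT_BREAKER_THRESHOLD
    private resetTimeoutMs = 30_000,  // mirrors CIRCUIT_BREAKER_RESET_TIMEOUT
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  // False while open (fast-fail); after the reset timeout, one probe
  // request is allowed through in the half-open state.
  canRequest(): boolean {
    if (this.state === "open") {
      if (this.now() - this.openedAt >= this.resetTimeoutMs) {
        this.state = "half-open";
        return true;
      }
      return false;
    }
    return true;
  }

  recordSuccess(): void {
    this.state = "closed";
    this.failures = 0;
  }

  recordFailure(): void {
    this.failures++;
    if (this.state === "half-open" || this.failures >= this.threshold) {
      this.state = "open";
      this.openedAt = this.now();
      this.failures = 0;
    }
  }
}

// Demo with a fake clock (t in ms) and a threshold of 3.
let t = 0;
const breaker = new CircuitBreaker(3, 1_000, () => t);
for (let i = 0; i < 3; i++) { breaker.canRequest(); breaker.recordFailure(); }
const openNow = breaker.canRequest();      // circuit open: fast-fail
t = 1_000;
const probeAllowed = breaker.canRequest(); // half-open: one probe allowed
breaker.recordSuccess();
const closedAgain = breaker.canRequest();  // closed: normal operation resumes
```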
Available Prompts
Prompt templates (12 total):
Core Reasoning
- debug_with_reasoning: Debug code with step-by-step analysis
- code_review_deep: Comprehensive code review (security, performance, quality)
- research_synthesis: Research topics and create structured reports
- strategic_planning: Create strategic plans with reasoning
- explain_like_im_five: Explain complex topics in simple terms
Advanced
- mathematical_proof: Prove mathematical statements rigorously
- argument_validation: Analyze arguments for logical fallacies
- creative_ideation: Generate creative ideas with feasibility analysis
- cost_comparison: Compare LLM costs for tasks
- pair_programming: Interactive coding with explanations
Function Calling
- function_call_debug: Debug function calling issues with tool definitions and messages
- create_function_schema: Generate JSON Schema for function calling from natural language
Each prompt is optimized for the DeepSeek Reasoner model to provide detailed reasoning.
Models
Both models run DeepSeek-V3.2 with unified pricing.
deepseek-chat
- Best for: General conversations, coding, content generation
- Speed: Fast
- Context: 128K tokens
- Max Output: 8K tokens (default 4K)
- Mode: Non-thinking (can enable thinking via parameter)
- Features: Thinking mode, JSON mode, function calling, FIM completion
- Pricing: $0.028/1M cache hit, $0.28/1M cache miss, $0.42/1M output
deepseek-reasoner
- Best for: Complex reasoning, math, logic problems, multi-step tasks
- Speed: Slower (shows thinking process)
- Context: 128K tokens
- Max Output: 64K tokens (default 32K)
- Mode: Thinking (always active, chain-of-thought reasoning)
- Features: JSON mode, function calling
- Output: Both reasoning process and final answer
- Pricing: $0.028/1M cache hit, $0.28/1M cache miss, $0.42/1M output
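As a rough sketch of how this pricing feeds the cache-aware cost tracking (illustrative; the server's actual logic lives in `src/cost.ts`, and the `Usage` field names below are hypothetical, not the API's exact schema):

```typescript
// Unified DeepSeek-V3.2 pricing in USD per 1M tokens, from the lists above.
const PRICE_PER_MILLION = {
  inputCacheHit: 0.028,
  inputCacheMiss: 0.28,
  output: 0.42,
};

// Hypothetical usage breakdown for one request.
interface Usage {
  cacheHitTokens: number;
  cacheMissTokens: number;
  outputTokens: number;
}

function costUsd(u: Usage): number {
  return (
    (u.cacheHitTokens * PRICE_PER_MILLION.inputCacheHit +
      u.cacheMissTokens * PRICE_PER_MILLION.inputCacheMiss +
      u.outputTokens * PRICE_PER_MILLION.output) /
    1_000_000
  );
}

// Example: 50K cached input + 10K uncached input + 2K output ≈ $0.00504
const exampleCost = costUsd({
  cacheHitTokens: 50_000,
  cacheMissTokens: 10_000,
  outputTokens: 2_000,
});
```

Note how cache hits dominate the savings: cached input tokens cost a tenth of uncached ones.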
Configuration
The server is configured via environment variables. All settings except DEEPSEEK_API_KEY are optional.
| Variable | Default | Description |
|---|---|---|
| `DEEPSEEK_API_KEY` | (required) | Your DeepSeek API key |
| `DEEPSEEK_BASE_URL` | `https://api.deepseek.com` | Custom API endpoint |
| `DEFAULT_MODEL` | `deepseek-chat` | Default model for requests |
| `SHOW_COST_INFO` | `true` | Show cost info in responses |
| `REQUEST_TIMEOUT` | `60000` | Request timeout in milliseconds |
| `MAX_RETRIES` | `2` | Maximum retry count for failed requests |
| `SKIP_CONNECTION_TEST` | `false` | Skip startup API connection test |
| `MAX_MESSAGE_LENGTH` | `100000` | Maximum message content length (characters) |
| `SESSION_TTL_MINUTES` | `30` | Session time-to-live in minutes |
| `MAX_SESSIONS` | `100` | Maximum number of concurrent sessions |
| `FALLBACK_ENABLED` | `true` | Enable automatic model fallback on errors |
| `CIRCUIT_BREAKER_THRESHOLD` | `5` | Consecutive failures before circuit opens |
| `CIRCUIT_BREAKER_RESET_TIMEOUT` | `30000` | Milliseconds before circuit half-opens |
| `MAX_SESSION_MESSAGES` | `200` | Max messages per session (sliding window) |
| `ENABLE_MULTIMODAL` | `false` | Enable multimodal (image) input support |
| `TRANSPORT` | `stdio` | Transport mode: `stdio` or `http` |
| `HTTP_PORT` | `3000` | HTTP server port (when `TRANSPORT=http`) |
Example with custom config:
```bash
claude mcp add -s user deepseek npx @arikusi/deepseek-mcp-server \
  -e DEEPSEEK_API_KEY=your-key \
  -e SHOW_COST_INFO=false \
  -e REQUEST_TIMEOUT=30000
```
Development
Project Structure
```text
deepseek-mcp-server/
├── worker/                      # Cloudflare Worker (remote BYOK endpoint)
│   ├── src/index.ts             # Worker entry point
│   ├── wrangler.toml            # Cloudflare config
│   └── package.json
├── src/
│   ├── index.ts                 # Entry point, bootstrap
│   ├── server.ts                # McpServer factory (auto-version)
│   ├── deepseek-client.ts       # DeepSeek API wrapper (circuit breaker + fallback)
│   ├── config.ts                # Centralized config with Zod validation
│   ├── cost.ts                  # Cost calculation and formatting
│   ├── schemas.ts               # Zod input validation schemas
│   ├── types.ts                 # TypeScript types + type guards
│   ├── errors.ts                # Custom error classes
│   ├── session.ts               # In-memory session store (multi-turn)
│   ├── circuit-breaker.ts       # Circuit breaker pattern
│   ├── usage-tracker.ts         # Usage statistics tracker
│   ├── transport-http.ts        # Streamable HTTP transport (Express)
│   ├── tools/
│   │   ├── deepseek-chat.ts     # deepseek_chat tool (sessions + fallback)
│   │   ├── deepseek-sessions.ts # deepseek_sessions tool
│   │   └── index.ts             # Tool registration aggregator
│   ├── resources/
│   │   ├── models.ts            # deepseek://models resource
│   │   ├── config.ts            # deepseek://config resource
│   │   ├── usage.ts             # deepseek://usage resource
│   │   └── index.ts             # Resource registration aggregator
│   └── prompts/
│       ├── core.ts              # 5 core reasoning prompts
│       ├── advanced.ts          # 5 advanced prompts
│       ├── function-calling.ts  # 2 function calling prompts
│       └── index.ts             # Prompt registration aggregator
├── dist/                        # Compiled JavaScript
├── llms.txt                     # AI discoverability index
├── llms-full.txt                # Full docs for LLM context
├── vitest.config.ts             # Test configuration
├── package.json
├── tsconfig.json
└── README.md
```
Building
```bash
npm run build
```
Watch Mode (for development)
```bash
npm run watch
```
Testing
```bash
# Run all tests
npm test

# Watch mode
npm run test:watch

# With coverage report
npm run test:coverage
```
Testing Locally
```bash
# Set API key
export DEEPSEEK_API_KEY="your-key"

# Run the server
npm start
```
The server will start and wait for MCP client connections via stdio.
Remote Endpoint (Hosted)
A hosted BYOK (Bring Your Own Key) endpoint is available at:
https://deepseek-mcp.tahirl.com/mcp
Send your DeepSeek API key as Authorization: Bearer <key>. No server-side API key stored — your key is used directly per request. Powered by Cloudflare Workers (global edge, zero cold start).
Note: The `deepseek-reasoner` model may take over 30 seconds for complex queries. Some MCP clients (e.g. Claude Code) have built-in tool call timeouts that may interrupt long-running requests. For complex tasks, `deepseek-chat` is recommended.
```bash
# Test health
curl https://deepseek-mcp.tahirl.com/health

# Test MCP (requires auth)
curl -X POST https://deepseek-mcp.tahirl.com/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_KEY" \
  -d '{"jsonrpc":"2.0","method":"initialize","params":{"capabilities":{}},"id":1}'
```
HTTP Transport (Self-Hosted)
Run your own HTTP endpoint:
```bash
TRANSPORT=http HTTP_PORT=3000 DEEPSEEK_API_KEY=your-key node dist/index.js
```
Test the health endpoint:
```bash
curl http://localhost:3000/health
```
The MCP endpoint is available at `POST /mcp` (Streamable HTTP protocol).
Docker
```bash
# Build
docker build -t deepseek-mcp-server .

# Run
docker run -d -p 3000:3000 -e DEEPSEEK_API_KEY=your-key deepseek-mcp-server

# Or use docker-compose
DEEPSEEK_API_KEY=your-key docker compose up -d
```
The Docker image defaults to HTTP transport on port 3000 with a built-in health check.
Troubleshooting
"DEEPSEEK_API_KEY environment variable is not set"
Option 1: Use the correct installation command
```bash
# Make sure to include the -e flag with your API key
claude mcp add deepseek npx @arikusi/deepseek-mcp-server -e DEEPSEEK_API_KEY=your-key-here
```
Option 2: Manually edit the config file
If you already installed without the API key, edit your config file:
- For Claude Code: Open `~/.claude.json` (Windows: `C:\Users\USERNAME\.claude.json`)
- Find the `"mcpServers"` section under your project path
- Add the `env` field with your API key:
```json
"deepseek": {
  "type": "stdio",
  "command": "npx",
  "args": ["@arikusi/deepseek-mcp-server"],
  "env": {
    "DEEPSEEK_API_KEY": "your-api-key-here"
  }
}
```
- Save and restart Claude Code
"Failed to connect to DeepSeek API"
- Check your API key is valid
- Verify you have internet connection
- Check DeepSeek API status at https://status.deepseek.com
Server not appearing in your MCP client
- Verify the path to `dist/index.js` is correct
- Make sure you ran `npm run build`
- Check your MCP client's logs for errors
- Restart your MCP client completely
Permission Denied on macOS/Linux
Make the file executable:
```bash
chmod +x dist/index.js
```
Publishing to npm
To share this MCP server with others:
- Run `npm login`
- Run `npm publish --access public`
Users can then install with:
```bash
npm install -g @arikusi/deepseek-mcp-server
```
Contributing
Contributions are welcome! Please read our Contributing Guidelines before submitting PRs.
Reporting Issues
Found a bug or have a feature request? Please open an issue using our templates.
Development
```bash
# Clone the repo
git clone https://github.com/arikusi/deepseek-mcp-server.git
cd deepseek-mcp-server

# Install dependencies
npm install

# Build in watch mode
npm run watch

# Run tests
npm test

# Lint
npm run lint
```
Changelog
See CHANGELOG.md for version history and updates.
License
MIT License - see LICENSE file for details
Support
- Documentation
- Bug Reports
- Discussions
- Contact: GitHub Issues
Resources
- DeepSeek Platform - Get your API key
- Model Context Protocol - MCP specification
- DeepSeek API Documentation - API reference
Acknowledgments
- Built with Model Context Protocol SDK
- Uses OpenAI SDK for API compatibility
- Created for the MCP community
Made by @arikusi
This is an unofficial community project and is not affiliated with DeepSeek.