
# DeepSeek MCP Server

A Model Context Protocol (MCP) server for the DeepSeek API, allowing seamless integration of DeepSeek's powerful language models with MCP-compatible applications like Claude Desktop. It supports model selection, temperature control, and multi-turn conversations with automatic model fallback.

Anonymously use the DeepSeek API -- only a proxy is seen on the other side.

## Tools

- `chat_completion`
- `multi_turn_chat`
<a href="https://glama.ai/mcp/servers/asht4rqltn"><img width="380" height="200" src="https://glama.ai/mcp/servers/asht4rqltn/badge" alt="DeepSeek Server MCP server" /></a>
## Installation

### Installing via Smithery

To install DeepSeek MCP Server for Claude Desktop automatically via Smithery:

```bash
npx -y @smithery/cli install @dmontgomery40/deepseek-mcp-server --client claude
```

### Manual Installation

```bash
npm install -g deepseek-mcp-server
```
## Usage with Claude Desktop

Add this to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "deepseek": {
      "command": "npx",
      "args": [
        "-y",
        "deepseek-mcp-server"
      ],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key"
      }
    }
  }
}
```
## Features

Note: The server intelligently handles natural language requests by mapping them to the appropriate configuration changes. You can also query the current settings and available models:

- User: "What models are available?"
- Response: Shows the list of available models and their capabilities via the models resource.
- User: "What configuration options do I have?"
- Response: Lists all available configuration options via the model-config resource.
- User: "What is the current temperature setting?"
- Response: Displays the current temperature setting.
- User: "Start a multi-turn conversation with the following settings: model: 'deepseek-chat', make it not too creative, and allow 8000 tokens."
- Response: Starts a multi-turn conversation with the specified settings (a sketch of the corresponding tool call follows below).
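
For illustration, the last request above might map to a `multi_turn_chat` tool call like the following sketch, written against the MCP TypeScript SDK. The argument names (`message`, `model`, `temperature`, `max_tokens`) are assumptions for illustration, not taken from the server's schema; check the tool schema the server reports for the actual fields.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio, the same way Claude Desktop does.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "deepseek-mcp-server"],
  env: { DEEPSEEK_API_KEY: process.env.DEEPSEEK_API_KEY ?? "" },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Argument names below are assumptions; consult the server's tool schema.
const result = await client.callTool({
  name: "multi_turn_chat",
  arguments: {
    message: "Let's plan a database migration.", // the new user turn
    model: "deepseek-chat",                      // explicit model selection
    temperature: 0.3,                            // "not too creative" -> low temperature
    max_tokens: 8000,                            // token budget from the request
  },
});
console.log(result.content);
```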
### Automatic Model Fallback if R1 is down

- If the primary model, R1 (called `deepseek-reasoner` in the server), is down, the server will automatically retry with V3 (called `deepseek-chat` in the server); a sketch of this fallback logic appears after this list.

Note: You can also switch back and forth at any time by giving your prompt and saying "use `deepseek-reasoner`" or "use `deepseek-chat`".

- V3 is recommended for general-purpose use, while R1 is recommended for more technical and complex queries, primarily due to speed and token usage.
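
A minimal sketch of the fallback idea, assuming DeepSeek's OpenAI-compatible chat completions endpoint at `https://api.deepseek.com`; the server's actual retry logic (for example, which errors trigger a fallback) may differ:

```typescript
// Minimal sketch of the fallback idea, assuming DeepSeek's OpenAI-compatible
// chat completions endpoint. The real server's retry logic may differ.
const DEEPSEEK_URL = "https://api.deepseek.com/chat/completions";

type ChatMessage = { role: string; content: string };

async function chatWithFallback(messages: ChatMessage[], apiKey: string) {
  // Try R1 first, then fall back to V3 if the request fails.
  for (const model of ["deepseek-reasoner", "deepseek-chat"]) {
    try {
      const res = await fetch(DEEPSEEK_URL, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${apiKey}`,
        },
        body: JSON.stringify({ model, messages }),
      });
      if (res.ok) return await res.json();
      // Non-2xx response (e.g. the model is unavailable): try the next model.
    } catch {
      // Network-level failure: also try the next model.
    }
  }
  throw new Error("Both deepseek-reasoner and deepseek-chat failed");
}
```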
Resource discovery for available models and configurations:

- Custom model selection
- Temperature control (0.0 - 2.0)
- Max tokens limit
- Top P sampling (0.0 - 1.0)
- Presence penalty (-2.0 - 2.0)
- Frequency penalty (-2.0 - 2.0)
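
As an illustration of those documented ranges, a client or server could validate parameters before sending a request. This `normalize` helper is hypothetical and simply mirrors the list above; it is not the server's actual code:

```typescript
// Hypothetical range validation mirroring the documented limits above;
// an illustration only, not the server's actual code.
function clamp(value: number, min: number, max: number): number {
  return Math.min(max, Math.max(min, value));
}

interface SamplingConfig {
  temperature?: number;       // 0.0 - 2.0
  top_p?: number;             // 0.0 - 1.0
  presence_penalty?: number;  // -2.0 - 2.0
  frequency_penalty?: number; // -2.0 - 2.0
}

function normalize(config: SamplingConfig): SamplingConfig {
  const out: SamplingConfig = {};
  if (config.temperature !== undefined) out.temperature = clamp(config.temperature, 0.0, 2.0);
  if (config.top_p !== undefined) out.top_p = clamp(config.top_p, 0.0, 1.0);
  if (config.presence_penalty !== undefined) out.presence_penalty = clamp(config.presence_penalty, -2.0, 2.0);
  if (config.frequency_penalty !== undefined) out.frequency_penalty = clamp(config.frequency_penalty, -2.0, 2.0);
  return out;
}
```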
### Enhanced Conversation Features

Multi-turn conversation support:

- Maintains complete message history and context across exchanges
- Preserves configuration settings throughout the conversation
- Handles complex dialogue flows and follow-up chains automatically

This feature is particularly valuable for two key use cases:

1. Training & fine-tuning: Since DeepSeek is open source, many users are training their own versions. The multi-turn support provides properly formatted conversation data that's essential for training high-quality dialogue models.
2. Complex interactions: For production use, this helps manage longer conversations where context is crucial:
   - Multi-step reasoning problems
   - Interactive troubleshooting sessions
   - Detailed technical discussions
   - Any scenario where context from earlier messages impacts later responses

The implementation handles all context management and message formatting behind the scenes, letting you focus on the actual interaction rather than the technical details of maintaining conversation state.
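
Conceptually, the state being maintained is an ordered, OpenAI-style message array that grows with each exchange. The following minimal sketch illustrates that format only; the server manages this for you:

```typescript
// Minimal sketch of multi-turn state: an ordered, OpenAI-style message array
// that grows with each exchange. Shown only to illustrate the format.
type Message = { role: "system" | "user" | "assistant"; content: string };

class Conversation {
  private history: Message[] = [];

  addUserTurn(content: string): void {
    this.history.push({ role: "user", content });
  }

  addAssistantTurn(content: string): void {
    this.history.push({ role: "assistant", content });
  }

  // The full history is sent with every request, so context from earlier
  // messages keeps influencing later responses.
  toRequestMessages(): Message[] {
    return [...this.history];
  }
}
```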
## Testing with MCP Inspector

You can test the server locally using the MCP Inspector tool:

1. Build the server:

   ```bash
   npm run build
   ```

2. Run the server with MCP Inspector:

   ```bash
   # Make sure to specify the full path to the built server
   npx @modelcontextprotocol/inspector node ./build/index.js
   ```

The inspector will open in your browser and connect to the server via stdio transport. You can:

- View available tools
- Test chat completions with different parameters
- Debug server responses
- Monitor server performance
Note: The server uses DeepSeek's R1 model (`deepseek-reasoner`) by default, which provides state-of-the-art performance for reasoning and general tasks.
## License

MIT