Discover Awesome MCP Servers
Extend your agent with 16,118 capabilities via MCP servers.
- All (16,118)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
systemprompt-mcp-reddit
A specialized MCP server that enables AI agents to interact with Reddit, including reading posts, creating content, and managing subreddit configurations.
Flow MCP Server
A Model Context Protocol server that enables AI assistants to access Flow blockchain data and perform operations such as checking balances, resolving domain names, executing scripts, and submitting transactions.
Embedding MCP Server
Mirror
Magnet Desktop
A macOS and Windows desktop application for exploring MCP (Model Context Protocol) servers.
ChatData MCP Server
Bilibili API MCP Server
An MCP server for the Bilibili API, supporting various operations.
CrewAI Enterprise MCP Server
A Model Context Protocol server that enables users to launch and monitor deployed CrewAI workflows through Claude Desktop.
Task API
discover prompt
MCP Claude Code
An MCP server that implements Claude Code-like functionality, allowing AI to analyze codebases, modify files, execute commands, and manage projects through direct filesystem interaction.
mcp-server-cex-bn
Mirror
Model-Context-Protocol
A Python Flask example server for LLM-powered text summarization, exposing a /summarize endpoint (with the LLaMA or Hugging Face model integration left as a placeholder) and a /health check endpoint.
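For reference, a minimal sketch of the service described above, assuming Flask is installed; the model call is a placeholder to be replaced with an actual LLaMA or Hugging Face summarization pipeline:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def summarize_text(text: str) -> str:
    # Placeholder: swap in a real LLaMA / Hugging Face summarization call here.
    return f"Placeholder summary for: {text[:40]}..."

@app.route("/summarize", methods=["POST"])
def summarize_endpoint():
    # Expects a JSON body like {"text": "..."}.
    data = request.get_json(silent=True) or {}
    text = data.get("text")
    if not text:
        return jsonify({"error": 'Missing "text" field in request'}), 400
    try:
        return jsonify({"summary": summarize_text(text)})
    except Exception as exc:
        return jsonify({"error": str(exc)}), 500

@app.route("/health", methods=["GET"])
def health_check():
    # Simple liveness probe for monitoring.
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    # Disable debug mode and restrict the bind address in production.
    app.run(host="0.0.0.0", port=5000)
```

Once running, send a POST request to `http://localhost:5000/summarize` with a JSON body such as `{"text": "Long article text to condense..."}`.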
PayAI MCP Server
A Model Context Protocol server for the PayAI network! Plug PayAI into Claude Desktop, Cursor, or your favorite MCP host!
MCP Chat Demo
A sample chat application demonstrating integration with Model Context Protocol (MCP) servers.
AegnticMCP
AegnticMCP automates the creation and management of MCP servers, ensuring they are stable, adaptable, and intelligent.
Model Context Protocol (MCP) Python server to use with continue.dev
A Python server implementing the Model Context Protocol, providing customizable prompt templates, resources, and tools to enhance LLM interactions in the continue.dev environment.
MCP Servers
A collection of Model Context Protocol (MCP) servers with setup instructions.
MCP Key Server
An MCP server for storing API keys and providing npm installation.
Mcp Server Redis
A server that gives Claude access to and operations on Redis databases through a set of tools covering basic operations, lists, hashes, sets, and pub/sub functionality.
convex-mcp-server MCP Server
Mirror
repo-template
A Model Context Protocol (MCP) server for Python package intelligence, providing structured queries for PyPI packages and GitHub repositories. Its capabilities include dependency analysis, version tracking, and package metadata retrieval for interaction with large language models (LLMs).
Portkey MCP Server
Mirror
Workflows MCP v0.1.0
Orchestrate and compose prompts and MCP servers into composite MCP tools.
mcp-server-modal
Mirror
mongo-db-mcp
An MCP server for MongoDB.
Puppeteer
Browser automation and web scraping.
Angle One MCP Server
Mirror
Strava MCP Server
Mirror
MCP SSE Server
arXiv-Search-MCP-Server
An MCP server for arXiv search.
MCP Tree-sitter Server
A Model Context Protocol (MCP) server that provides code analysis capabilities using tree-sitter, designed to give Claude intelligent access to codebases with appropriate context management.