Discover Awesome MCP Servers

Extend your agent with 15,578 capabilities via MCP servers.

playwright-mcp-fetch

This tool provides a Model Context Protocol (MCP) server for fetching content from websites and converting it into various formats using Playwright.
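
As a rough illustration of the kind of fetch operation such a server wraps (a sketch only, not this project's actual code), the snippet below loads a page with headless Chromium via Playwright's Python API and returns its rendered HTML and visible text; the URL is a placeholder.

```python
# Minimal sketch of a Playwright-based fetch, not playwright-mcp-fetch's own code.
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def fetch_page(url: str) -> dict:
    """Load a page with headless Chromium and return its HTML and visible text."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()            # full rendered HTML
        text = page.inner_text("body")   # visible text only
        browser.close()
    return {"html": html, "text": text}

if __name__ == "__main__":
    result = fetch_page("https://example.com")  # placeholder URL
    print(result["text"][:200])
```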

Bilibili API MCP Server

An MCP server for the Bilibili API. Supports various operations.

Magnet Desktop

A macOS and Windows desktop app for discovering MCP (Model Context Protocol) servers.

ChatData MCP Server

CrewAI Enterprise MCP Server

A Model Context Protocol server that lets users start and monitor deployed CrewAI workflows through Claude Desktop.

Mcp Server Redis

A server that gives Claude access to and operations on a Redis database, with a toolset for basic operations, lists, hashes, sets, and pub/sub functionality.
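
To make the shape of such tools concrete, here is a hedged sketch of how a couple of Redis tools could be exposed with the official Python MCP SDK's FastMCP helper and redis-py. The server name and the local Redis connection details are assumptions for illustration, not this project's actual implementation.

```python
# Sketch of Redis-backed MCP tools using FastMCP and redis-py; not Mcp Server Redis's own code.
# Requires: pip install mcp redis
import redis
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("redis-tools")  # hypothetical server name
r = redis.Redis(host="localhost", port=6379, decode_responses=True)  # assumed local instance

@mcp.tool()
def redis_get(key: str) -> str:
    """Return the string value stored at `key`, or an empty string if it does not exist."""
    return r.get(key) or ""

@mcp.tool()
def redis_set(key: str, value: str) -> bool:
    """Set `key` to `value` and report whether Redis acknowledged the write."""
    return bool(r.set(key, value))

if __name__ == "__main__":
    mcp.run()  # serve over stdio by default
```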

Roo MCP Server

Createve.AI Nexus: Enabling Real-World AI Agent Deployment 🚀

Createve.AI Nexus Server: an MCP and API bridge for advanced LLM and agent use.

comfy-ui-mcp-server MCP server

Mirror.

任务API (Task API)

discover prompt

mcp-server-cex-bn

Mirror.

Model-Context-Protocol

Okay, I understand. Here's a Python code example using Flask to create an MCP (presumably meaning a minimal or model-centric application) that integrates a LLaMA model for summarization and serves it via a web API. I'll use the `transformers` library for LLaMA and assume you have a LLaMA model available (e.g., from Hugging Face Model Hub). I'll also include error handling and basic input validation.

**Important Considerations:**

* **Model Size and Hardware:** LLaMA models can be very large. You'll need sufficient RAM and potentially a GPU to run them efficiently. This code assumes you have the necessary hardware.
* **Model Loading:** The code assumes you have a LLaMA model saved locally or accessible via a Hugging Face model name. Adjust the `model_name` variable accordingly.
* **API Key/Authentication:** For production environments, you'll want to add authentication (e.g., API keys) to protect your API. This example omits that for simplicity.
* **Error Handling:** The error handling is basic. You should enhance it for a production application.
* **Rate Limiting:** Implement rate limiting to prevent abuse of your API.
* **Dependencies:** Make sure you have the required libraries installed: `pip install flask transformers torch` (or `tensorflow` if you prefer TensorFlow).
* **Model License:** Be aware of the license associated with the LLaMA model you are using.

```python
from flask import Flask, request, jsonify
from transformers import pipeline
import torch  # Or tensorflow, depending on your setup
import logging

app = Flask(__name__)

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

# Model loading (adjust model_name as needed)
model_name = "facebook/llama-2-7b-chat-hf"  # Example: Replace with your LLaMA model name/path

try:
    # Check if CUDA is available and use it, otherwise use CPU
    device = "cuda:0" if torch.cuda.is_available() else "cpu"
    logging.info(f"Using device: {device}")
    summarizer = pipeline("summarization", model=model_name, device=device)
    logging.info(f"Model {model_name} loaded successfully.")
except Exception as e:
    logging.error(f"Error loading model: {e}")
    summarizer = None  # Set to None to indicate loading failure
    # Consider exiting the application if the model fails to load.
    # raise  # Uncomment to stop the app if model loading fails


@app.route('/summarize', methods=['POST'])
def summarize_text():
    """
    API endpoint for summarizing text using the LLaMA model.
    """
    if summarizer is None:
        return jsonify({"error": "Model not loaded. Check server logs."}), 500

    try:
        data = request.get_json()
        if not data or 'text' not in data:
            return jsonify({"error": "Missing 'text' field in request body."}), 400

        text = data['text']
        if not isinstance(text, str) or len(text) == 0:
            return jsonify({"error": "Invalid 'text' input. Must be a non-empty string."}), 400

        # Perform summarization
        try:
            summary = summarizer(text, max_length=130, min_length=30, do_sample=False)[0]['summary_text']  # Adjust parameters as needed
            return jsonify({"summary": summary})
        except Exception as e:
            logging.error(f"Error during summarization: {e}")
            return jsonify({"error": f"Summarization failed: {e}"}), 500

    except Exception as e:
        logging.error(f"Error processing request: {e}")
        return jsonify({"error": f"Internal server error: {e}"}), 500


if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)  # Make sure debug is False in production
```

**Explanation:**

1. **Imports:** Imports necessary libraries (Flask, `transformers`, `torch`, logging).
2. **Flask App Initialization:** Creates a Flask application instance.
3. **Logging:** Configures basic logging to help with debugging. Crucial for production.
4. **Model Loading:**
   * Sets the `model_name` to the Hugging Face model name or local path of your LLaMA model. **Replace this with your actual model!**
   * Determines if CUDA (GPU) is available and sets the `device` accordingly.
   * Uses the `pipeline` from `transformers` to load the model for summarization. This is a high-level API that simplifies model usage.
   * Includes a `try...except` block to handle potential errors during model loading. If the model fails to load, the `summarizer` variable is set to `None`, and an error message is logged. The application *continues* to run, but the `/summarize` endpoint will return an error. You might want to `raise` the exception to stop the app if model loading is critical.
5. **`/summarize` Endpoint:**
   * Defines a Flask route `/summarize` that accepts POST requests.
   * Checks if the `summarizer` is `None` (meaning the model failed to load). If so, it returns an error.
   * Retrieves the JSON data from the request body using `request.get_json()`.
   * Validates that the request body contains a `text` field and that it is a non-empty string.
   * Calls the `summarizer` pipeline to generate the summary. The `max_length`, `min_length`, and `do_sample` parameters control the summarization process. Adjust these as needed for your specific model and requirements. `do_sample=False` makes the summarization more deterministic.
   * Extracts the summary text from the pipeline's output and returns it in a JSON response.
   * Includes `try...except` blocks to handle potential errors during request processing and summarization.
6. **`if __name__ == '__main__':` Block:**
   * Starts the Flask development server.
   * `debug=True` enables debugging mode (useful during development but should be `False` in production).
   * `host='0.0.0.0'` makes the server accessible from any IP address (useful for testing on a network).
   * `port=5000` sets the port number.

**How to Run:**

1. **Save:** Save the code as a Python file (e.g., `app.py`).
2. **Install Dependencies:** `pip install flask transformers torch`
3. **Set Model:** **Crucially, replace `"facebook/llama-2-7b-chat-hf"` with the correct model name or path.**
4. **Run:** `python app.py`
5. **Test:** Send a POST request to `http://localhost:5000/summarize` with a JSON body like this:

```json
{
  "text": "This is a long piece of text that I want to summarize. It contains many sentences and paragraphs. The goal is to reduce the text to its most important points while preserving the overall meaning."
}
```

You can use `curl`, `Postman`, or any other HTTP client to send the request.

**Example using `curl`:**

```bash
curl -X POST -H "Content-Type: application/json" -d '{"text": "This is a long piece of text that I want to summarize. It contains many sentences and paragraphs. The goal is to reduce the text to its most important points while preserving the overall meaning."}' http://localhost:5000/summarize
```

**Key Improvements and Best Practices:**

* **Error Handling:** More robust error handling is essential for production. Log errors to a file, use more specific exception types, and provide informative error messages to the client.
* **Input Validation:** Thoroughly validate all input data to prevent security vulnerabilities and unexpected behavior.
* **Security:** Implement authentication (e.g., API keys, JWT) to protect your API.
* **Rate Limiting:** Use a rate-limiting library (e.g., Flask-Limiter) to prevent abuse of your API.
* **Asynchronous Processing:** For long-running summarization tasks, consider using asynchronous processing (e.g., Celery, Redis Queue) to avoid blocking the Flask server.
* **Configuration:** Use environment variables or a configuration file to store sensitive information (e.g., API keys, model paths).
* **Monitoring:** Monitor your application's performance and resource usage.
* **Model Optimization:** Explore techniques like quantization or pruning to reduce the model's size and improve performance.
* **GPU Usage:** Ensure that your model is running on a GPU if available for faster inference. The code includes a check for CUDA.
* **Logging:** Use a proper logging framework to record events and errors.
* **Documentation:** Document your API using tools like Swagger or OpenAPI.
* **Testing:** Write unit tests and integration tests to ensure that your code is working correctly.
* **Deployment:** Use a production-ready WSGI server (e.g., Gunicorn, uWSGI) to deploy your application.

This comprehensive example provides a solid foundation for building an MCP application with LLaMA for summarization. Remember to adapt it to your specific needs and environment. Good luck!

AegnticMCP

AegnticMCP automates the creation and management of MCP servers, ensuring stability, adaptability, and intelligence.

PayAI MCP Server

A Model Context Protocol server for the PayAI network! Connect PayAI to Claude Desktop, Cursor, or your favorite MCP host!
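
Connecting an MCP server to Claude Desktop generally means adding an entry under `mcpServers` in `claude_desktop_config.json`. The entry below is purely illustrative: the package name and environment variable are placeholders, and the real command for the PayAI server should come from its own documentation.

```json
{
  "mcpServers": {
    "payai": {
      "command": "npx",
      "args": ["-y", "<payai-mcp-server-package>"],
      "env": { "PAYAI_API_KEY": "<your-key>" }
    }
  }
}
```

After editing the file, restart Claude Desktop so it picks up the new server.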

MCP Chat Demo

A sample chat application demonstrating integration with Model Context Protocol (MCP) servers.
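
A chat application of this kind typically opens a client session against each configured MCP server. As a minimal, hedged sketch of that handshake with the official Python SDK (the server command is a placeholder, not part of this demo):

```python
# Minimal MCP client handshake sketch; not the MCP Chat Demo's actual code.
# Requires: pip install mcp
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Placeholder: launch whatever MCP server the chat app is configured with.
    params = StdioServerParameters(command="python", args=["my_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()           # protocol handshake
            tools = await session.list_tools()   # discover what the server exposes
            print([t.name for t in tools.tools])

asyncio.run(main())
```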

MCP Servers

A collection of Model Context Protocol (MCP) servers, with setup instructions.

MCP Key Server

An MCP server that stores API keys and provides npm installation.

Strava MCP Server

Mirror.

convex-mcp-server MCP Server

Mirror.

MCP SSE Server

arXiv-Search-MCP-Server

An MCP server for searching arXiv.
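
A search like this usually goes through arXiv's public Atom API. The standalone sketch below (not this project's code) queries that endpoint with the standard library and lists the top results:

```python
# Standalone sketch of an arXiv search query; not arXiv-Search-MCP-Server's own code.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def search_arxiv(query: str, max_results: int = 5) -> list[dict]:
    """Query the public arXiv API and return title/link pairs for the top results."""
    params = urllib.parse.urlencode({
        "search_query": f"all:{query}",
        "start": 0,
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        feed = ET.fromstring(resp.read())
    return [
        {
            "title": entry.findtext(f"{ATOM}title", default="").strip(),
            "link": entry.findtext(f"{ATOM}id", default=""),
        }
        for entry in feed.findall(f"{ATOM}entry")
    ]

if __name__ == "__main__":
    for paper in search_arxiv("model context protocol"):
        print(paper["title"], "-", paper["link"])
```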

repo-template

A Model Context Protocol (MCP) server for Python package intelligence. Provides structured queries against PyPI packages and GitHub repositories, with features such as dependency analysis, version tracking, and package metadata retrieval for LLM interactions.
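
Package metadata retrieval of this kind commonly uses PyPI's public JSON endpoint. The following is a standalone, hedged sketch of such a lookup, not this repository's actual implementation:

```python
# Standalone sketch of a PyPI metadata lookup; not this project's actual code.
import json
import urllib.request

def pypi_metadata(package: str) -> dict:
    """Fetch version, summary, and declared dependencies from PyPI's public JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    info = data["info"]
    return {
        "name": info["name"],
        "version": info["version"],
        "summary": info["summary"],
        "requires_dist": info.get("requires_dist") or [],
    }

if __name__ == "__main__":
    meta = pypi_metadata("requests")
    print(meta["name"], meta["version"])
    print(meta["requires_dist"])
```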

Angle One MCP Server

Mirror.

Workflows MCP v0.1.0

Compose and orchestrate prompts and MCP servers to build compound MCP tools.

MCP Servers for Cursor IDE

Portkey MCP Server

Mirror.

Model Context Protocol (MCP) Python server to use with continue.dev

A Python server implementing the Model Context Protocol. It provides customizable prompt templates, resources, and tools to enhance LLM interactions in the continue.dev environment.
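
As a generic illustration of what prompt templates, resources, and tools look like in the Python MCP SDK (a sketch under assumed names, not this server's actual code):

```python
# Generic FastMCP sketch showing a prompt template, a resource, and a tool;
# not the continue.dev server's actual implementation.
# Requires: pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("continue-helper")  # hypothetical server name

@mcp.prompt()
def review_code(code: str) -> str:
    """Prompt template asking the model to review a code snippet."""
    return f"Please review the following code and point out bugs:\n\n{code}"

@mcp.resource("config://style-guide")
def style_guide() -> str:
    """A static resource the client can pull into context."""
    return "Use 4-space indentation and type hints for all public functions."

@mcp.tool()
def count_lines(text: str) -> int:
    """Count the number of lines in a piece of text."""
    return len(text.splitlines())

if __name__ == "__main__":
    mcp.run()
```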

MCP Tree-sitter Server

A Model Context Protocol (MCP) server that provides code analysis capabilities using tree-sitter, designed to give Claude intelligent access to codebases with proper context management.
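
Tree-sitter-based analysis boils down to parsing source into a syntax tree and walking its nodes. The standalone sketch below uses the py-tree-sitter bindings (API as of py-tree-sitter 0.22+ with the separate tree-sitter-python grammar package); it is an illustration, not this server's implementation:

```python
# Standalone tree-sitter parsing sketch; not MCP Tree-sitter Server's own code.
# Requires: pip install tree-sitter tree-sitter-python
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

PY_LANGUAGE = Language(tspython.language())
parser = Parser(PY_LANGUAGE)

source = b"def add(a, b):\n    return a + b\n"
tree = parser.parse(source)

# Walk top-level nodes and report function definitions with their line numbers.
for node in tree.root_node.children:
    if node.type == "function_definition":
        name = node.child_by_field_name("name")
        print(name.text.decode(), "defined at line", node.start_point[0] + 1)
```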

Mcp Mongodb

MCP Server.

MCP Server Generator Guide