Discover Awesome MCP Servers
Extend your agent with 16,059 capabilities via MCP servers.
- All (16,059)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Agent MCP
A Multi-Agent Collaboration Protocol server that enables coordinated AI collaboration through task management, context sharing, and agent interaction visualization.
FastMCP Demo Server
A production-ready MCP server that provides hackathon resources and reusable starter prompts. Built with FastMCP framework and includes comprehensive deployment options for development and production environments.
Outlook MCP Server
Enables interaction with Outlook email through Microsoft Graph API. Supports email management operations like reading, searching, marking as read/unread, and deleting messages through natural language.
Fetch-Save MCP Server
A Model Context Protocol server that enables LLMs to retrieve web content and save it to local files for permanent storage and later access.
CityGML MCP Server
MCP GDB Server
Provides GDB debugging capabilities to Claude and other AI assistants, allowing users to manage debugging sessions, set breakpoints, inspect variables, and execute GDB commands through natural language.
Fugle MCP Server
MCP Geometry Server
An MCP server that enables AI models to generate precise geometric images by providing Asymptote code, supporting both SVG and PNG output formats.
21st.dev Magic AI Agent
A powerful AI-driven tool that helps developers create beautiful, modern UI components instantly through natural language descriptions.
Intervals.icu MCP Server
Mirror (repository)
Icypeas MCP Server
A Model Context Protocol server that integrates with the Icypeas API to help users find work emails based on name and company information.
Comedy MCP Server
An MCP server built with the C# SDK that enhances comments with jokes from JokeAPI.
X MCP Server
Enables users to interact with X (Twitter) through the X API. Supports posting tweets, retrieving user timelines, searching tweets, and replying to tweets with comprehensive error handling.
LINE Bot MCP Server
Model Context Protocol server implementation that integrates the LINE Messaging API to connect AI agents with LINE Official Accounts, enabling agents to send messages to users.
Display & Video 360 API MCP Server
An MCP server that enables interaction with Google's Display & Video 360 advertising platform API, allowing management of digital advertising campaigns through natural language commands.
MCP MySQL Server
Enables interaction with MySQL databases (including AWS RDS and cloud instances) through natural language. Supports database connections, query execution, schema inspection, and comprehensive database management operations.
Meraki Magic MCP
A Python-based MCP server that enables querying Cisco's Meraki Dashboard API to discover, monitor, and manage Meraki environments.
Cursor Rust Tools
An MCP server that gives the LLM in Cursor access to Rust Analyzer, crate documentation, and Cargo commands.
Html2url
Remote MCP Server
A cloud-based custom MCP server using Azure Functions that enables saving and retrieving code snippets with secure communication through keys, HTTPS, OAuth, and network isolation options.
V2.ai Insights Scraper MCP
A Model Context Protocol server that scrapes blog posts from V2.ai Insights, extracts content, and provides AI-powered summaries using OpenAI's GPT-4.
MCP with Langchain Sample Setup
A sample MCP server and client setup designed to be compatible with LangChain. The example uses a simple request-response pattern, suitable for offloading LangChain tasks to a separate process or machine.

**Important Considerations:**

* **Serialization:** LangChain objects can be complex. You'll need a robust serialization/deserialization method (e.g., `pickle`, `json`, `cloudpickle`) to send data between the server and client. `cloudpickle` is often preferred for its ability to handle more complex Python objects, including closures and functions. Note that unpickling untrusted data can execute arbitrary code, so only use `pickle` between trusted endpoints.
* **Error Handling:** Implement comprehensive error handling on both the server and client to gracefully manage exceptions and network issues.
* **Security:** If you're transmitting data over a network, consider encryption (e.g., TLS/SSL) to protect sensitive information.
* **Asynchronous Operations:** For better performance, especially with I/O-bound LangChain tasks, consider asynchronous programming (e.g., `asyncio`). This example is synchronous for clarity.
* **Message Format:** Define a clear message format (e.g., JSON with specific keys) for requests and responses.
* **LangChain Compatibility:** The key is to serialize the *input* to a LangChain component (such as a Chain or LLM) on the client, send it to the server, deserialize it, run the component on the server, then serialize the *output* and send it back to the client.

**Python Code (using the `socket` module for simplicity):**

**1. Server (`server.py`):**

```python
import socket
import pickle  # Or json, cloudpickle
import os

# Example LangChain setup (replace with your actual chain)
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"  # Replace with your actual API key

llm = OpenAI(temperature=0.7)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)

HOST = '127.0.0.1'  # Standard loopback interface address (localhost)
PORT = 65432        # Port to listen on (non-privileged ports are > 1023)

def process_langchain_request(data):
    """Processes a LangChain request. This is the core logic on the server."""
    try:
        # Deserialize the input (assuming it's a dictionary).
        # Crucially, ensure it matches what your LangChain component expects.
        input_data = pickle.loads(data)  # Or json.loads(data) if using JSON
        result = chain.run(input_data["product"])  # Replace with your actual LangChain call
        return pickle.dumps(result)  # Serialize the result
    except Exception as e:
        print(f"Error processing request: {e}")
        return pickle.dumps({"error": str(e)})  # Serialize the error message

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen()
    print(f"Server listening on {HOST}:{PORT}")
    conn, addr = s.accept()
    with conn:
        print(f"Connected by {addr}")
        while True:
            data = conn.recv(4096)  # Adjust buffer size as needed
            if not data:
                break
            conn.sendall(process_langchain_request(data))
```

**2. Client (`client.py`):**

```python
import socket
import pickle  # Or json, cloudpickle

HOST = '127.0.0.1'  # The server's hostname or IP address
PORT = 65432        # The port used by the server

def send_langchain_request(input_data):
    """Sends a LangChain request to the server and returns the response."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))
            s.sendall(pickle.dumps(input_data))  # Serialize the input data
            received = s.recv(4096)  # Adjust buffer size as needed
            return pickle.loads(received)  # Deserialize the response
    except Exception as e:
        print(f"Error sending request: {e}")
        return {"error": str(e)}

if __name__ == "__main__":
    input_data = {"product": "eco-friendly cleaning products"}  # Replace with your actual input
    response = send_langchain_request(input_data)
    if isinstance(response, dict) and "error" in response:
        print(f"Error from server: {response['error']}")
    else:
        print(f"Server response: {response}")
```

**How to Run:**

1. **Install LangChain:** `pip install langchain openai`
2. **Set your OpenAI API key:** Replace `"YOUR_API_KEY"` in `server.py` with your actual key.
3. **Run the server:** `python server.py`
4. **Run the client:** `python client.py`

**Explanation:**

* **Server (`server.py`):** Creates a socket and listens for incoming connections. When a client connects, it receives data, deserializes it (using `pickle`), processes it with a LangChain component (here a simple `LLMChain`), serializes the result, and sends it back. Includes basic error handling.
* **Client (`client.py`):** Creates a socket, connects to the server, serializes the input data, sends it, receives and deserializes the response, and prints the result. Includes basic error handling.
* **Serialization:** `pickle` (or `json`, `cloudpickle`) converts Python objects into a byte stream for the network. The same method must be used for both serialization and deserialization.
* **LangChain Integration:** The `process_langchain_request` function on the server is where the LangChain logic resides.

**Key Improvements and Considerations for Production:**

* **Asynchronous communication (`asyncio`):** Non-blocking I/O lets the server handle multiple clients concurrently and significantly improves performance.
* **Message queues (e.g., RabbitMQ, Redis):** More robust and scalable than direct sockets; decouples client and server and allows asynchronous processing.
* **gRPC:** Efficient, type-safe communication using Protocol Buffers, generally faster and more compact than `pickle` or `json`.
* **Authentication and authorization:** Secure the server and prevent unauthorized access.
* **Logging:** Use a logging library (e.g., `logging`) to record events and errors for debugging and monitoring.
* **Configuration:** Store settings like server address, port, and API keys in a configuration file (e.g., YAML, JSON).
* **Monitoring:** Track the server's performance and resource usage to identify bottlenecks.
* **Data validation:** Validate input on both client and server to prevent errors and security vulnerabilities.
* **Retry logic:** Handle transient network errors on the client.
* **Heartbeat mechanism:** Detect and handle server failures.

**Cloudpickle:** For complex LangChain objects, especially those involving custom functions or classes, `cloudpickle` is often necessary for proper serialization and deserialization. Install it with `pip install cloudpickle` and swap it in for `pickle`:

```python
# server.py
import cloudpickle

def process_langchain_request(data):
    try:
        input_data = cloudpickle.loads(data)
        result = chain.run(input_data["product"])
        return cloudpickle.dumps(result)
    except Exception as e:
        print(f"Error processing request: {e}")
        return cloudpickle.dumps({"error": str(e)})

# client.py
import cloudpickle

def send_langchain_request(input_data):
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))
            s.sendall(cloudpickle.dumps(input_data))
            received = s.recv(4096)
            return cloudpickle.loads(received)
    except Exception as e:
        print(f"Error sending request: {e}")
        return {"error": str(e)}
```

This provides a solid foundation for a distributed LangChain application. Adapt the code to your specific needs and consider the production-level improvements above.
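The asynchronous variant suggested under the production considerations can be sketched as follows. This is a minimal illustration, not part of the sample itself: it keeps the same pickle-over-TCP message format but replaces the LangChain call with a placeholder echo (a real server would invoke `chain.run` where noted).

```python
import asyncio
import pickle

async def handle_client(reader, writer):
    """Handle one request-response exchange per connection."""
    data = await reader.read(4096)  # Same single-message framing as the sync example
    if data:
        try:
            input_data = pickle.loads(data)
            # Placeholder for the LangChain call; chain.run(...) would go here.
            result = {"echo": input_data}
        except Exception as e:
            result = {"error": str(e)}
        writer.write(pickle.dumps(result))
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # start_server schedules handle_client for each client, so many clients
    # can be served concurrently without threads.
    server = await asyncio.start_server(handle_client, "127.0.0.1", 65432)
    async with server:
        await server.serve_forever()

# Run with: asyncio.run(main())
```

The key design difference from the synchronous version is that the event loop multiplexes connections, so a slow LangChain call for one client does not block accepting others (provided the call itself is awaited or offloaded to an executor).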
DataForSEO MCP Server
Enables AI assistants to access comprehensive SEO data through DataForSEO APIs, including SERP results, keyword research, backlink analysis, on-page metrics, and domain analytics. Supports real-time search engine data from Google, Bing, and Yahoo with customizable filtering and multiple deployment options.
Continuo Memory System
Enables persistent memory and semantic search for development workflows with hierarchical compression. Store and retrieve development knowledge across IDE sessions using natural language queries, circumventing context window limitations.
MCP Docker Sandbox Interpreter
A secure Docker-based environment that allows AI assistants to safely execute code without direct access to the host system by running all code within isolated containers.
HDFS MCP Server by CData
Google Search MCP Server
A Model Context Protocol server that provides web and image search capabilities through Google's Custom Search API, allowing AI assistants like Claude to access current information from the internet.
HaloPSA MCP Server
Enables AI assistants to interact with HaloPSA data through secure OAuth2 authentication. Supports SQL queries against the HaloPSA database, API endpoint exploration, and direct API calls for comprehensive PSA data analysis and management.
Gemini MCP Server
A Model Context Protocol server that enables LLMs to perform web searches using Google's Gemini API and return synthesized responses with citations.
Spring AI MCP Weather Server Sample with WebMVC Starter