Discover Awesome MCP Servers
Extend your agent with 14,680 capabilities via MCP servers.
- All (14,680)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
MCP Snowflake Reader
A Model Context Protocol (MCP) server that provides secure, read-only access to Snowflake databases. Allows LLMs to safely query tables and describe schemas without write permissions. Built with Python and the official Snowflake SDK.
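To illustrate the read-only pattern this entry describes, here is a minimal sketch using the `snowflake-connector-python` package; the statement whitelist, credentials, and table name are hypothetical, not this server's actual implementation.

```python
# Hypothetical sketch of a read-only query guard; not this project's actual code.
import snowflake.connector

READ_ONLY_PREFIXES = ("select", "show", "describe", "desc", "explain")

def run_read_only(conn, sql: str):
    """Execute a statement only if it looks read-only; reject everything else."""
    if not sql.strip().lower().startswith(READ_ONLY_PREFIXES):
        raise PermissionError("Only read-only statements are allowed")
    cur = conn.cursor()
    try:
        cur.execute(sql)
        return cur.fetchall()
    finally:
        cur.close()

conn = snowflake.connector.connect(
    user="READONLY_USER",   # ideally a role granted only SELECT privileges
    password="...",
    account="my_account",   # hypothetical account identifier
)
print(run_read_only(conn, "SELECT * FROM my_table LIMIT 5"))
```

Pairing a guard like this with a Snowflake role that only has SELECT grants gives defense in depth: even if a write statement slips past the string check, the database itself refuses it.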
ghcontext: Supercharge Your LLMs with Real-time GitHub Context
An MCP server that provides real-time GitHub data to LLMs, enhancing their software development capabilities.

test
test
OpenAI MCP Example
This project demonstrates how to use the MCP protocol with OpenAI. It provides a simple example of interacting seamlessly with the OpenAI API through an MCP server and client.
Smartlead MCP Server
Smartlead MCP server.

Medium MCP Server
A browser-based solution that enables programmatic interaction with Medium's content ecosystem despite API deprecation, allowing publishing, retrieving, and searching Medium articles through Claude integration.
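Since Medium's public API is deprecated, "browser-based" here means driving a real browser. A rough sketch of that approach with Playwright follows; the selectors, session file, and flow are hypothetical, and Medium's actual pages require an authenticated session and will differ.

```python
# Hypothetical browser-automation sketch; Medium's real page structure will differ.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context(storage_state="medium_session.json")  # saved login
    page = context.new_page()
    page.goto("https://medium.com/new-story")
    page.keyboard.type("My Post Title\n")                 # title, then body
    page.keyboard.type("Post body written programmatically.")
    # Publishing would click through Medium's publish dialog here.
    browser.close()
```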

46elks MCP Server
Enables AI assistants and MCP-compatible clients to send and manage SMS messages through the 46elks API, leveraging Swedish telecommunications infrastructure.
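Under the hood this maps onto the 46elks REST API. A minimal sketch of the SMS call, assuming the documented `https://api.46elks.com/a1/sms` endpoint with HTTP basic auth; the credentials, sender name, and number are placeholders.

```python
# Minimal sketch of an SMS send via the 46elks REST API; values are placeholders.
import requests

resp = requests.post(
    "https://api.46elks.com/a1/sms",
    auth=("API_USERNAME", "API_PASSWORD"),  # your 46elks API credentials
    data={"from": "MyService", "to": "+46700000000", "message": "Hello from MCP"},
)
print(resp.status_code, resp.json())
```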

MCP JSON Database Server
A JSON-based database MCP server with JWT authentication that enables user management, project tracking, department analysis, meeting management, and equipment tracking. Integrates with Claude Desktop to provide secure CRUD operations and analytics through natural language commands.
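A minimal sketch of the JWT issue/verify cycle a server like this could build on, using PyJWT; the secret, claims, and one-hour lifetime are illustrative, not this project's actual configuration.

```python
# Illustrative JWT issue/verify helpers using PyJWT; not this project's code.
import time
import jwt

SECRET = "change-me"  # placeholder; load from configuration in practice

def issue_token(user_id: str) -> str:
    claims = {"sub": user_id, "exp": int(time.time()) + 3600}  # 1-hour lifetime
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # jwt.decode validates the signature and the exp claim by default,
    # raising jwt.InvalidTokenError subclasses on failure.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("alice")
print(verify_token(token)["sub"])  # -> "alice"
```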
MCP Server LeetCode
Mirror of
mcp-db-server

Crowdlistening
Crowdlistening

mcp-test
test

QuickBooks Online MCP Server by CData
QuickBooks Online MCP Server by CData

Html2url

Remote MCP Server
A cloud-based custom MCP server using Azure Functions that enables saving and retrieving code snippets with secure communication through keys, HTTPS, OAuth, and network isolation options.

V2.ai Insights Scraper MCP
A Model Context Protocol server that scrapes blog posts from V2.ai Insights, extracts content, and provides AI-powered summaries using OpenAI's GPT-4.
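The scrape-then-summarize pipeline it describes looks roughly like the sketch below; the post URL is a placeholder, and the OpenAI client reads `OPENAI_API_KEY` from the environment.

```python
# Rough scrape-and-summarize sketch; the URL is hypothetical.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

html = requests.get("https://www.v2.ai/insights/example-post").text  # placeholder URL
text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"Summarize this blog post:\n\n{text[:8000]}"}],
)
print(completion.choices[0].message.content)
```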
MCP with Langchain Sample Setup
Note: "MCP" in this sample means **Message Passing Communication**, not a standard term in the LangChain ecosystem. The goal is a way for LangChain components (agents, chains, etc.) to communicate with each other, potentially across different processes or machines, using Python and **RabbitMQ**, with alternatives listed at the end.

**Conceptual Overview**

1. **Message Queue:** RabbitMQ acts as the central hub for communication. LangChain components (clients) publish messages to the queue, and other LangChain components (servers) subscribe to the queue and process those messages.
2. **Serialization:** LangChain objects (prompts, documents, etc.) must be serialized (converted to a string format) before being sent over the queue. We use `pickle` or `json`. `pickle` can serialize more complex Python objects, but it is less secure than `json` when handling untrusted data.
3. **Asynchronous Communication:** Message queues are inherently asynchronous; the sender doesn't wait for a response. This is ideal for decoupling components and improving performance.
4. **LangChain Integration:** Message-queue interactions are wrapped in functions that make it easy to send and receive LangChain-related data.

**Example Implementation (RabbitMQ)**

**1. Install Libraries:**

```bash
pip install pika langchain
```

**2. RabbitMQ Setup (Important):**

* You'll need a RabbitMQ server running. You can install it locally (e.g., using Docker) or use a cloud-based RabbitMQ service.
* **Docker (Recommended for Local Development):**

```bash
docker run -d -p 5672:5672 -p 15672:15672 rabbitmq:3-management
```

This starts a RabbitMQ server with the management UI accessible at `http://localhost:15672`. The default username/password is `guest/guest`. **Do not use these credentials in production!**

**3. Server (Consumer):**

```python
import json
import pickle

import pika
from langchain.prompts import PromptTemplate

# RabbitMQ connection parameters
RABBITMQ_HOST = 'localhost'  # Change if your RabbitMQ is elsewhere
RABBITMQ_QUEUE = 'langchain_queue'


def process_message(body):
    """
    Processes a message received from the queue.
    This is where you'd integrate with your LangChain logic.
    """
    try:
        # Attempt to load as JSON first
        try:
            data = json.loads(body)
            # Example: assuming the message contains a prompt
            if "prompt" in data:
                prompt_text = data["prompt"]
                print(f"Received prompt: {prompt_text}")
                # Example LangChain usage:
                prompt = PromptTemplate.from_template(prompt_text)
                # You would then use this prompt with an LLM, e.g.:
                # from langchain.llms import OpenAI
                # llm = OpenAI(api_key="YOUR_API_KEY")
                # result = llm(prompt.format(input="some input"))
                # print(f"LLM Result: {result}")
                return f"Processed prompt: {prompt_text}"
            else:
                return f"Received JSON data: {data}"
        except json.JSONDecodeError:
            # If JSON fails, try pickle
            data = pickle.loads(body)
            print(f"Received (pickled) data: {data}")
            return f"Processed pickled data: {data}"
    except Exception as e:
        print(f"Error processing message: {e}")
        return f"Error: {e}"


def callback(ch, method, properties, body):
    """Called whenever a message is received."""
    print(f" [x] Received {body}")
    result = process_message(body)
    print(f" [x] Done. Result: {result}")
    ch.basic_ack(delivery_tag=method.delivery_tag)  # Acknowledge the message


def start_server():
    """Starts the RabbitMQ consumer (server)."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=RABBITMQ_HOST))
    channel = connection.channel()
    channel.queue_declare(queue=RABBITMQ_QUEUE, durable=True)  # Survive broker restarts
    channel.basic_qos(prefetch_count=1)  # Process one message at a time
    channel.basic_consume(queue=RABBITMQ_QUEUE, on_message_callback=callback)
    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()


if __name__ == "__main__":
    start_server()
```

**4. Client (Producer):**

```python
import json

import pika
from langchain.prompts import PromptTemplate

# RabbitMQ connection parameters
RABBITMQ_HOST = 'localhost'  # Change if your RabbitMQ is elsewhere
RABBITMQ_QUEUE = 'langchain_queue'


def send_message(message):
    """Sends a message to the RabbitMQ queue."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=RABBITMQ_HOST))
    channel = connection.channel()
    channel.queue_declare(queue=RABBITMQ_QUEUE, durable=True)
    channel.basic_publish(
        exchange='',
        routing_key=RABBITMQ_QUEUE,
        body=message,
        properties=pika.BasicProperties(
            delivery_mode=2,  # Make messages persistent
        ))
    print(f" [x] Sent {message}")
    connection.close()


if __name__ == "__main__":
    # Example 1: Sending a LangChain prompt (as JSON)
    prompt = PromptTemplate.from_template("Tell me a joke about {topic}.")
    prompt_dict = {"prompt": prompt.template}  # Extract the template string
    message_json = json.dumps(prompt_dict)
    send_message(message_json)

    # Example 2: Sending a LangChain prompt (as a pickled object)
    # import pickle
    # message_pickle = pickle.dumps(prompt)
    # send_message(message_pickle)

    # Example 3: Sending a simple string
    # send_message("Hello, LangChain server!")
```

**Explanation:**

* **Server (Consumer):**
  * Connects to RabbitMQ and declares the queue (`langchain_queue`). `durable=True` ensures the queue survives RabbitMQ restarts.
  * Sets `prefetch_count=1` to process one message at a time, which prevents the server from being overwhelmed.
  * Defines a `callback` function that deserializes each message (using `json` or `pickle`), processes it in `process_message`, and acknowledges it with `ch.basic_ack`. Acknowledging tells RabbitMQ the message has been successfully processed and can be removed from the queue. **Important:** if you don't acknowledge the message, RabbitMQ will re-deliver it to another consumer (or the same consumer after a restart).
  * `process_message` is where you integrate your LangChain logic. In the example, it receives a prompt, creates a `PromptTemplate` object, and then (theoretically) uses it with an LLM. Adapt this to your specific use case.
* **Client (Producer):**
  * Connects to RabbitMQ and declares the same queue.
  * Serializes the LangChain object (using `pickle` or `json`) and publishes it. `delivery_mode=2` makes the messages persistent, so they survive RabbitMQ restarts.
  * Closes the connection.

**How to Run:**

1. Start the RabbitMQ server (e.g., using Docker).
2. Run the `server.py` script. It will wait for messages.
3. Run the `client.py` script. It will send a message to the queue.
4. The server receives the message, processes it, and prints the result.

**Important Considerations and Improvements:**

* **Error Handling:** The code includes basic error handling, but you should add more robust handling, especially around network connections and serialization/deserialization.
* **Security:** `pickle` is insecure if you're receiving data from untrusted sources; use `json` whenever possible. If you must use `pickle`, carefully validate the data before deserializing it. Use proper authentication and authorization for your RabbitMQ server, and **never use the default `guest/guest` credentials in production.**
* **Message Format:** Define a clear message format (e.g., a JSON schema) so the client and server can communicate effectively. Include a `type` field in your messages to indicate the kind of data being sent (e.g., "prompt", "document", "result").
* **Routing:** RabbitMQ supports more advanced routing mechanisms (exchanges, routing keys, bindings) that deliver messages to specific consumers based on content, useful for more complex systems.
* **Asynchronous Client:** For better performance, consider an asynchronous RabbitMQ client (e.g., `asyncio` with `aio-pika`) so the client can send messages without blocking the main thread.
* **LangChain Callbacks:** LangChain's callback system can monitor the progress of chains and agents; the message queue could carry callback events to a central monitoring service.
* **Alternatives to RabbitMQ:**
  * **Redis:** A fast, in-memory data store that can also serve as a message queue. Simpler to set up than RabbitMQ, but less robust for message persistence.
  * **ZeroMQ:** A high-performance messaging library suited to distributed applications. More complex to use than RabbitMQ, but more flexible.
  * **gRPC:** A good choice for synchronous communication with well-defined interfaces. It uses Protocol Buffers for serialization and supports multiple languages.
  * **Celery:** A distributed task queue, often used with Django and other Python web frameworks; it can use RabbitMQ or Redis as a broker.
* **Serialization Libraries:** Consider `marshmallow` or `pydantic` for more robust, type-safe serialization and deserialization.

**Example with a more complex LangChain object (Document):**

```python
import json

# Client (Producer) - sending a LangChain Document
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter

# Load a document
loader = TextLoader("my_document.txt")  # Replace with your file
documents = loader.load()

# Split the document into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

# Serialize the first document chunk (for example)
doc_to_send = docs[0]
doc_dict = {
    "page_content": doc_to_send.page_content,
    "metadata": doc_to_send.metadata,
}
message_json = json.dumps(doc_dict)
send_message(message_json)  # send_message as defined in the client above


# Server (Consumer) - receiving a LangChain Document
def process_message(body):
    try:
        data = json.loads(body)
        if "page_content" in data and "metadata" in data:
            # Reconstruct the Document object
            from langchain.docstore.document import Document
            received_doc = Document(page_content=data["page_content"],
                                    metadata=data["metadata"])
            print(f"Received Document: {received_doc}")
            # Now you can use received_doc with LangChain
            return "Document received and processed"
        else:
            return "Invalid document format"
    except json.JSONDecodeError:
        return "Invalid JSON"
```

**Key Improvements in this example:**

* **LangChain Document Handling:** Demonstrates how to serialize and deserialize a `Document` object, a common data structure in LangChain.
* **Clearer Message Structure:** Uses a dictionary for the document's content and metadata, making it easier to work with on both the client and server sides.

**Choosing the Right Approach**

The best approach for your MCP setup depends on your specific requirements:

* **Simplicity:** If you just need to send simple strings or numbers, `json` serialization and a basic RabbitMQ setup will suffice.
* **Complex Objects:** For complex LangChain objects (prompts, documents, chains, agents), use `pickle` (with caution) or define custom serialization/deserialization logic with `json` and dictionaries.
* **Performance:** For high-performance messaging, explore asynchronous RabbitMQ clients or ZeroMQ.
* **Scalability:** RabbitMQ is a good choice for scalable systems.
* **Synchronous vs. Asynchronous:** If the client must wait for a response, gRPC may be a better fit than a message queue.

Remember to adapt the code examples to your specific LangChain use case and to implement proper error handling and security measures.

Sequential Questioning MCP Server
A specialized server that enables LLMs to gather specific information through sequential questioning, implementing the MCP standard for seamless integration with LLM clients.

Elasticsearch MCP Server by CData
Elasticsearch MCP Server by CData

Image Converter MCP Server
Enables conversion between multiple image formats including JPG, PNG, WebP, GIF, BMP, TIFF, SVG, ICO, and AVIF with quality control and batch processing capabilities.
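The core conversion step is small; below is a sketch with Pillow. Note that plain Pillow does not read SVG, and AVIF support may require a plugin such as pillow-avif-plugin, so this covers only part of the format list above.

```python
# Minimal format-conversion sketch with Pillow; quality applies to lossy formats.
from PIL import Image

def convert(src: str, dst: str, quality: int = 85) -> None:
    img = Image.open(src)
    if dst.lower().endswith((".jpg", ".jpeg")):
        img = img.convert("RGB")  # JPEG cannot store an alpha channel
    img.save(dst, quality=quality)

convert("photo.png", "photo.jpg", quality=90)
```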

Agent MCP
A Multi-Agent Collaboration Protocol server that enables coordinated AI collaboration through task management, context sharing, and agent interaction visualization.

AVA MCP Server
A custom MCP server that provides AI applications with access to an Artificial Virtual Assistant (AVA) toolset, enabling Gmail integration and task management through natural language.

Fetch-Save MCP Server
A Model Context Protocol server that enables LLMs to retrieve web content and save it to local files for permanent storage and later access.
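For a sense of scale, a tool like this can be expressed in a few lines with the MCP Python SDK's FastMCP helper. This is a hedged sketch under that assumption, not the project's actual code; the tool name and behavior are illustrative.

```python
# Hypothetical fetch-and-save tool built on the MCP Python SDK's FastMCP helper.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fetch-save")

@mcp.tool()
def fetch_and_save(url: str, path: str) -> str:
    """Download a URL and write the response body to a local file."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    with open(path, "w", encoding="utf-8") as f:
        f.write(resp.text)
    return f"Saved {len(resp.text)} characters to {path}"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```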

IOL MCP Server
A Model-Controller-Proxy server that acts as an intermediary between clients and the InvertirOnline (IOL) API, providing a simplified interface for portfolio management, stock quotes, and trading operations.

Make.com MCP Server
Enables Claude Desktop to trigger and interact with Make.com automation scenarios through webhooks. Allows users to execute complex workflows and integrations with third-party services like Google Sheets, Notion, and Slack using natural language commands.
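Behind the natural-language layer, triggering a scenario is ultimately one HTTP request to its webhook. A sketch follows; the webhook URL and payload are placeholders for your own scenario.

```python
# Triggering a Make.com scenario via its webhook; URL and payload are placeholders.
import requests

resp = requests.post(
    "https://hook.eu1.make.com/your-webhook-id",
    json={"sheet": "Leads", "values": ["Ada Lovelace", "ada@example.com"]},
)
print(resp.status_code, resp.text)
```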
CityGML MCP Server

Enterprise Template Generator
Enables generation of enterprise-grade software templates with built-in GDPR/Swedish compliance validation, workflow automation for platform migrations, and comprehensive template management through domain-driven design principles.

Pub.dev MCP Server
Enables AI assistants to search, analyze, and retrieve detailed information about Dart and Flutter packages from pub.dev. Supports package discovery, version management, dependency analysis, and documentation access.
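pub.dev exposes a public JSON API that a server like this can sit on top of. A sketch of fetching package metadata, assuming the documented `https://pub.dev/api/packages/<name>` endpoint; error handling is omitted for brevity.

```python
# Fetching package metadata from pub.dev's public JSON API.
import requests

info = requests.get("https://pub.dev/api/packages/http").json()
latest = info["latest"]
print(latest["version"], latest["pubspec"].get("description"))
```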
MCP MongoDB Integration
This project demonstrates how to integrate MongoDB with the Model Context Protocol (MCP) to give AI assistants the ability to interact with databases.

MCP GDB Server
Provides GDB debugging functionality for use with Claude or other AI assistants, allowing users to manage debugging sessions, set breakpoints, inspect variables, and execute GDB commands through natural language.