Discover Awesome MCP Servers

Extend your agent with 29,296 capabilities via MCP servers.

mb-mcp

A standalone Node.js MCP server designed for Memory Bank workflows that enables users to generate project-specific documentation from codebases. It provides tools for creating execution instructions and retrieving structured documentation context, specifically for the iOS stack.

atom-mcp-server

Global price benchmarking for AI inference across 2,600+ SKUs from 47 vendors. Query live pricing, market indexes, and model specs via 8 tools. Free tier available.

paprika-mcp

An MCP server that provides read-only access to the Paprika Recipe Manager, allowing users to list and retrieve recipes, grocery items, and meal plans. It enables seamless interaction with recipe details and category information through the Paprika API.

CityGML MCP Server

Hono MCP Sample Server

A sample Model Context Protocol server built with Hono framework that provides weather and news resources, calculator and string reversal tools, and code review prompt templates.

Android Code Search MCP Server

Enables searching and browsing Android source code across projects like Android, AndroidX, and Android Studio via cs.android.com. It provides tools for regex-based code searches, full file content retrieval, and symbol autocomplete suggestions.

MCP Server for MySQL

Provides access to MySQL databases with fine-grained access control, supporting multiple databases simultaneously with configurable access modes (readonly, readwrite, full) and table-level permissions using whitelists, blacklists, wildcards, and regex patterns.
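Table-level rules like these are typically evaluated by checking each table name against the configured patterns. A minimal sketch of how whitelist/blacklist matching with wildcards and regexes might work, assuming blacklist entries take precedence and an illustrative `re:` prefix for regex rules (the server's actual rule syntax and precedence may differ):

```python
import fnmatch
import re

def table_allowed(table, whitelist, blacklist):
    """Check a table name against whitelist/blacklist rules.

    Rules prefixed with "re:" are treated as regular expressions;
    everything else is matched as a glob-style wildcard pattern.
    Blacklist entries take precedence over whitelist entries.
    (Illustrative only -- not the server's actual implementation.)
    """
    def matches(rules):
        for rule in rules:
            if rule.startswith("re:"):
                if re.fullmatch(rule[3:], table):
                    return True
            elif fnmatch.fnmatch(table, rule):
                return True
        return False

    if matches(blacklist):
        return False
    return matches(whitelist)

# Example: allow all "app_*" tables except audit logs
print(table_allowed("app_users", ["app_*"], ["app_audit_*"]))      # True
print(table_allowed("app_audit_log", ["app_*"], ["app_audit_*"]))  # False
```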

Markdown MCP Server

An MCP (Model Context Protocol) server for efficiently managing Markdown documents in Cursor AI IDE, supporting CRUD operations, search, and metadata management.

MCP with Langchain Sample Setup

A sample setup for an MCP server and client compatible with LangChain. "MCP" is not a standard term in the LangChain ecosystem, so this sample treats it as generic message passing: a way for LangChain components (agents, chains, etc.) to communicate with each other, potentially across different processes or machines. The examples below use Python and the message broker **RabbitMQ**; alternatives are listed at the end.

**Conceptual Overview**

1. **Message Queue:** A message queue (RabbitMQ) acts as the central hub for communication. LangChain components (clients) publish messages to the queue, and other LangChain components (servers) subscribe to the queue and process those messages.
2. **Serialization:** LangChain objects (prompts, documents, etc.) need to be serialized (converted to a string format) before being sent over the message queue. We'll use `pickle` or `json`. `pickle` can serialize more complex Python objects, but it's generally less secure than `json` when dealing with untrusted data.
3. **Asynchronous Communication:** Message queues are inherently asynchronous. The sender doesn't wait for a response. This is ideal for decoupling components and improving performance.
4. **LangChain Integration:** The message-queue interactions are wrapped in functions that make it easy to send and receive LangChain-related data.

**Example Implementation (RabbitMQ)**

**1. Install Libraries:**

```bash
pip install pika langchain
```

**2. RabbitMQ Setup (Important):**

You'll need a RabbitMQ server running. You can install it locally (e.g., using Docker) or use a cloud-based RabbitMQ service.

* **Docker (Recommended for Local Development):**

  ```bash
  docker run -d -p 5672:5672 -p 15672:15672 rabbitmq:3-management
  ```

  This starts a RabbitMQ server with the management UI accessible at `http://localhost:15672`. The default username/password is `guest/guest`. **Do not use these credentials in production!**

**3. Server (Consumer):**

```python
import json
import pickle

import pika
from langchain.prompts import PromptTemplate

# RabbitMQ connection parameters
RABBITMQ_HOST = 'localhost'  # Change if your RabbitMQ is elsewhere
RABBITMQ_QUEUE = 'langchain_queue'


def process_message(body):
    """Process a message received from the queue.

    This is where you'd integrate with your LangChain logic.
    """
    try:
        # Attempt to load as JSON first
        try:
            data = json.loads(body)
            # Example: assuming the message contains a prompt
            if "prompt" in data:
                prompt_text = data["prompt"]
                print(f"Received prompt: {prompt_text}")
                prompt = PromptTemplate.from_template(prompt_text)
                # You would then use this prompt with an LLM, for example:
                # from langchain.llms import OpenAI
                # llm = OpenAI(api_key="YOUR_API_KEY")
                # result = llm(prompt.format(input="some input"))
                # print(f"LLM Result: {result}")
                return f"Processed prompt: {prompt_text}"
            return f"Received JSON data: {data}"
        except json.JSONDecodeError:
            # If JSON fails, try pickle
            data = pickle.loads(body)
            print(f"Received (pickled) data: {data}")
            return f"Processed pickled data: {data}"
    except Exception as e:
        print(f"Error processing message: {e}")
        return f"Error: {e}"


def callback(ch, method, properties, body):
    """Called whenever a message is received."""
    print(f" [x] Received {body}")
    result = process_message(body)
    print(f" [x] Done. Result: {result}")
    ch.basic_ack(delivery_tag=method.delivery_tag)  # Acknowledge the message


def start_server():
    """Start the RabbitMQ consumer (server)."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=RABBITMQ_HOST))
    channel = connection.channel()
    channel.queue_declare(queue=RABBITMQ_QUEUE, durable=True)  # Survive broker restarts
    channel.basic_qos(prefetch_count=1)  # Process one message at a time
    channel.basic_consume(queue=RABBITMQ_QUEUE, on_message_callback=callback)
    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()


if __name__ == "__main__":
    start_server()
```

**4. Client (Producer):**

```python
import json

import pika
from langchain.prompts import PromptTemplate

# RabbitMQ connection parameters
RABBITMQ_HOST = 'localhost'  # Change if your RabbitMQ is elsewhere
RABBITMQ_QUEUE = 'langchain_queue'


def send_message(message):
    """Send a message to the RabbitMQ queue."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=RABBITMQ_HOST))
    channel = connection.channel()
    channel.queue_declare(queue=RABBITMQ_QUEUE, durable=True)
    channel.basic_publish(
        exchange='',
        routing_key=RABBITMQ_QUEUE,
        body=message,
        properties=pika.BasicProperties(
            delivery_mode=2,  # Make messages persistent
        ))
    print(f" [x] Sent {message}")
    connection.close()


if __name__ == "__main__":
    # Example 1: sending a LangChain prompt (as JSON)
    prompt = PromptTemplate.from_template("Tell me a joke about {topic}.")
    prompt_dict = {"prompt": prompt.template}  # Extract the template string
    send_message(json.dumps(prompt_dict))

    # Example 2: sending a LangChain prompt (as a pickled object)
    # import pickle
    # send_message(pickle.dumps(prompt))

    # Example 3: sending a simple string
    # send_message("Hello, LangChain server!")
```

**Explanation:**

* **Server (Consumer):**
  * Connects to RabbitMQ and declares the queue (`langchain_queue`). `durable=True` ensures the queue survives RabbitMQ restarts.
  * Sets `prefetch_count=1` to process one message at a time, which prevents the server from being overwhelmed.
  * Defines a `callback` function that's executed when a message is received. It deserializes the message (using `json` or `pickle`), processes it in `process_message`, and acknowledges it with `ch.basic_ack`. Acknowledging tells RabbitMQ that the message has been successfully processed and can be removed from the queue. **Important:** If you don't acknowledge the message, RabbitMQ will re-deliver it to another consumer (or to the same consumer after a restart).
  * `process_message` is where you integrate your LangChain logic. In the example it receives a prompt, creates a `PromptTemplate`, and would then use it with an LLM; adapt this to your use case.
* **Client (Producer):**
  * Connects to RabbitMQ and declares the same queue.
  * Serializes the LangChain object (using `json` or `pickle`) and publishes it. `delivery_mode=2` makes messages persistent so they survive RabbitMQ restarts.
  * Closes the connection.

**How to Run:**

1. Start the RabbitMQ server (e.g., using Docker).
2. Run the server script; it will wait for messages.
3. Run the client script; it will send a message to the queue.
4. The server receives the message, processes it, and prints the result.

**Important Considerations and Improvements:**

* **Error Handling:** The code includes basic error handling, but you should add more robust handling, especially around network connections and serialization/deserialization.
* **Security:** `pickle` is insecure when receiving data from untrusted sources. Use `json` whenever possible; if you must use `pickle`, carefully validate the data before deserializing it. Also configure proper authentication and authorization for your RabbitMQ server. **Never use the default `guest/guest` credentials in production.**
* **Message Format:** Define a clear message format (e.g., a JSON schema) so the client and server can communicate reliably. Include a `type` field in your messages to indicate the kind of data being sent (e.g., "prompt", "document", "result").
* **Routing:** RabbitMQ supports more advanced routing mechanisms (exchanges, routing keys, bindings) that let you send messages to specific consumers based on their content, which is useful for more complex systems.
* **Asynchronous Client:** For better performance, consider an asynchronous RabbitMQ client (e.g., `asyncio` with `aio-pika`) so the client can send messages without blocking the main thread.
* **LangChain Callbacks:** LangChain's callback system lets you monitor the progress of chains and agents; you could use the message queue to ship callback events to a central monitoring service.
* **Alternatives to RabbitMQ:**
  * **Redis:** A fast, in-memory data store that can also serve as a message queue. Simpler to set up than RabbitMQ, but less robust for message persistence.
  * **ZeroMQ:** A high-performance messaging library for distributed applications. More complex to use than RabbitMQ, but more flexible.
  * **gRPC:** A good choice if you need synchronous communication with well-defined interfaces. It uses Protocol Buffers for serialization and supports multiple languages.
  * **Celery:** A distributed task queue often used with Django and other Python web frameworks; it can use RabbitMQ or Redis as a broker.
* **Serialization Libraries:** Consider `marshmallow` or `pydantic` for more robust, type-safe serialization and deserialization.

**Example with a more complex LangChain object (Document):**

```python
# Client (Producer) - sending a LangChain Document
import json

from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter

# Load a document
loader = TextLoader("my_document.txt")  # Replace with your file
documents = loader.load()

# Split the document into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

# Serialize the first document chunk (for example)
doc_to_send = docs[0]
doc_dict = {
    "page_content": doc_to_send.page_content,
    "metadata": doc_to_send.metadata,
}
send_message(json.dumps(doc_dict))  # send_message from the client above


# Server (Consumer) - receiving a LangChain Document
def process_message(body):
    try:
        data = json.loads(body)
        if "page_content" in data and "metadata" in data:
            # Reconstruct the Document object
            from langchain.docstore.document import Document
            received_doc = Document(page_content=data["page_content"],
                                    metadata=data["metadata"])
            print(f"Received Document: {received_doc}")
            # Now you can use received_doc with LangChain
            return "Document received and processed"
        return "Invalid document format"
    except json.JSONDecodeError:
        return "Invalid JSON"
```

**Key Improvements in this example:**

* **LangChain Document Handling:** Demonstrates how to serialize and deserialize a `Document` object, a common data structure in LangChain.
* **Clearer Message Structure:** Uses a dictionary for the document's content and metadata, making it easier to work with on both sides.

**Choosing the Right Approach**

The best approach for your MCP setup depends on your requirements:

* **Simplicity:** If you only need to send simple strings or numbers, `json` serialization and a basic RabbitMQ setup suffice.
* **Complex Objects:** To send complex LangChain objects (prompts, documents, chains, agents), use `pickle` (with caution) or define custom serialization logic with `json` and dictionaries.
* **Performance:** For high-performance messaging, explore asynchronous RabbitMQ clients or ZeroMQ.
* **Scalability:** RabbitMQ is a good choice for scalable systems.
* **Synchronous vs. Asynchronous:** If the client must wait for a response, gRPC may be a better fit than a message queue.

Remember to adapt the code examples to your specific LangChain use case and to implement proper error handling and security measures.
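The "define a clear message format" advice above can be sketched as a small typed envelope. The helper names (`make_envelope`, `parse_envelope`) and the set of known types are illustrative, not part of any library:

```python
import json

# Illustrative message types, per the "Message Format" suggestion above
KNOWN_TYPES = {"prompt", "document", "result"}

def make_envelope(msg_type, payload):
    """Wrap a payload in a typed JSON envelope."""
    if msg_type not in KNOWN_TYPES:
        raise ValueError(f"Unknown message type: {msg_type}")
    return json.dumps({"type": msg_type, "payload": payload})

def parse_envelope(body):
    """Decode an envelope and return (type, payload), validating the type field."""
    data = json.loads(body)
    if data.get("type") not in KNOWN_TYPES:
        raise ValueError("Missing or unknown type field")
    return data["type"], data["payload"]

# Round trip: the consumer can now dispatch on msg_type
body = make_envelope("prompt", {"template": "Tell me a joke about {topic}."})
msg_type, payload = parse_envelope(body)
print(msg_type, payload["template"])
```

Dispatching on an explicit `type` field keeps the consumer's `process_message` from guessing at the payload shape, which is exactly the failure mode of the json-then-pickle fallback in the sample server.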

Hurricane Tracker MCP Server

Provides real-time hurricane tracking, 5-day forecast cones, location-based alerts, and historical storm data from NOAA/NHC through MCP tools for AI assistants.

Sequential Questioning MCP Server

A specialized server that enables LLMs to gather specific information through sequential questioning, implementing the MCP standard for seamless integration with LLM clients.

Spotinst MCP Server

An MCP server for the Spot.io API that enables management of AWS and Azure Ocean clusters across multiple accounts. It provides tools for cluster inventory, node management, cost analysis, and scaling operations through natural language.

Plasmate

Agent-native headless browser for AI agents. Converts web pages to a Semantic Object Model (SOM) instead of raw HTML — 17x average token reduction across real-world sites (up to 117x on complex pages). Native MCP server with fetch_page, extract_text, extract_links, and full browser automation. No API key required.

Vora

The first voice-AI MCP server for AI agents.

PinePaper MCP Server

Enables AI assistants to create and animate graphics in PinePaper Studio using natural language, supporting text, shapes, behavior-driven animations, procedural backgrounds, and SVG export.

WuWa MCP Server

Enables querying detailed information about characters, echoes, and character profiles from the Wuthering Waves game, returning results in LLM-optimized Markdown format.

Html2url

Remote MCP Server

A cloud-based custom MCP server using Azure Functions that enables saving and retrieving code snippets with secure communication through keys, HTTPS, OAuth, and network isolation options.

codewiki-mcp

MCP server for codewiki.google — AI-powered wiki documentation for open-source repositories. Search repos, fetch wiki content, and ask questions about any repo.

V2.ai Insights Scraper MCP

A Model Context Protocol server that scrapes blog posts from V2.ai Insights, extracts content, and provides AI-powered summaries using OpenAI's GPT-4.

Claude Agents MCP Server

Centrally manages Claude agent definitions, configurations, and custom commands across multiple devices using a SQLite database, eliminating file synchronization conflicts and enabling live updates across all connected Claude sessions.

Conclave MCP

Provides access to multiple frontier LLM models (GPT, Claude, Gemini, Grok, DeepSeek) for consulting a "conclave" of AI perspectives, enabling peer-ranked evaluations and synthesized consensus answers for important decisions.

Puppeteer Real Browser MCP Server

A Model Context Protocol server that enables AI assistants to control a real web browser with stealth capabilities, avoiding bot detection while performing tasks like clicking, filling forms, taking screenshots, and extracting data.

Kunobi MCP Server

Exposes Kunobi's application data and store querying capabilities to AI assistants by bridging stdio to Kunobi's local HTTP MCP endpoint. It features dynamic tool discovery and automatic reconnection to ensure tools like query_store and app_info are always available when Kunobi is running.

MCP Developer Server

Provides instant access to 700+ programming documentation sources and creates isolated Docker containers for safe code testing and experimentation. Combines comprehensive documentation lookup with containerized development environments for enhanced development workflows.

EduChain MCP Server

Enables AI-powered generation of educational content including multiple-choice questions, lesson plans, and flashcards on any topic through integration with the EduChain library and DeepSeek model via OpenRouter.

Netlify MCP Server

Enables code agents to interact with Netlify services through the Model Context Protocol, allowing them to create, build, deploy, and manage Netlify resources using natural language prompts.

MCP Personal Assistant

A comprehensive personal productivity server that manages projects, todos, calendar events, documents, and status tracking with support for encrypted storage and multiple database backends.

reqable-mcp

Read-only MCP server that brings Reqable's captured HTTP traffic into Claude Code conversations, enabling querying, searching, waiting for, and analyzing requests without requiring Reqable Pro.

Oura MCP Server

Enables AI assistants to access Oura Ring health data including sleep, readiness, activity, heart rate, and workouts through OAuth2-authenticated API calls with automatic token refresh and smart caching.
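"Automatic token refresh and smart caching" as described here usually amounts to checking expiry before each API call and refreshing slightly early. A minimal sketch, assuming a hypothetical `refresh_fn` callback that returns `(token, lifetime_seconds)`; a real server would call the OAuth2 provider's token endpoint there (this is not the Oura server's actual code):

```python
import time

class TokenCache:
    """Cache an OAuth2 access token and refresh it shortly before expiry.

    `refresh_fn` is a hypothetical callback returning (token, lifetime_seconds);
    a real implementation would hit the provider's token endpoint instead.
    """
    def __init__(self, refresh_fn, skew=60, clock=time.monotonic):
        self._refresh_fn = refresh_fn
        self._skew = skew          # refresh this many seconds before expiry
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Refresh on first use, or when inside the skew window before expiry
        if self._token is None or self._clock() >= self._expires_at - self._skew:
            self._token, lifetime = self._refresh_fn()
            self._expires_at = self._clock() + lifetime
        return self._token

# Usage with a fake clock to show the caching behavior:
now = [0.0]
calls = []
def fake_refresh():
    calls.append(now[0])
    return f"token-{len(calls)}", 3600  # 1-hour lifetime

cache = TokenCache(fake_refresh, clock=lambda: now[0])
print(cache.get())   # token-1 (first call triggers a refresh)
print(cache.get())   # token-1 (served from cache)
now[0] = 3590        # inside the 60 s skew window before expiry
print(cache.get())   # token-2 (refreshed early)
```

Refreshing `skew` seconds early avoids sending a token that expires mid-request; injecting the clock makes the behavior testable without real waiting.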