Discover Awesome MCP Servers

Extend your agent with 29,072 capabilities via MCP servers.

Zero Network MCP Server

Provides AI agents with access to Zero Network documentation, SDK integration guides, and utility tools for crypto-based payments. It enables developers to implement x402 paywalls and per-tool MCP pricing while offering real-time cost estimations and revenue calculations.

Obsidian MCP Server

Enables MCP clients to interact with Obsidian vaults via filesystem operations and optional REST API integration for advanced UI commands. It features multi-vault auto-discovery, concurrent-safe file handling, and comprehensive tools for searching, reading, and managing vault content.

MCP GDB Server

Provides GDB debugging functionality for use with Claude or other AI assistants, allowing users to manage debugging sessions, set breakpoints, examine variables, and execute GDB commands through natural language.

IoEHub MQTT MCP Server

Fugle MCP Server

ArxivAutoJob

This repository only collects Arxiv papers with [arxiv_mcp_project](... - *insert the rest of the link here, if available*).

MCP Geometry Server

An MCP server that enables AI models to generate precise geometric images by providing Asymptote code, supporting both SVG and PNG output formats.

MCP Server for Mem.ai

Enables AI assistants to intelligently save, organize, and retrieve content through Mem.ai's knowledge management platform. Supports creating notes, collections, and AI-powered content processing with automatic organization.

TypeScript MCP Server Boilerplate

A boilerplate project for quickly developing Model Context Protocol (MCP) servers using TypeScript SDK, with examples of tools (calculator, greeting) and resources (server info).

mysql-mcp

A lightweight MCP server providing safe, read-only access to MySQL databases. It enables users to query multiple MySQL instances securely while preventing write operations.
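
A server like this typically enforces read-only access by validating each statement before it reaches the database. The sketch below is a hypothetical illustration of such a guard, not this project's actual code:

```python
import re

# Statement keywords a read-only guard would allow (hypothetical allowlist).
READ_ONLY_PREFIXES = ("select", "show", "describe", "explain")

def is_read_only(sql: str) -> bool:
    """Return True if the statement looks like a safe read-only query."""
    # Strip leading whitespace and SQL comments, then check the first keyword.
    stmt = re.sub(r"^\s*(--[^\n]*\n|/\*.*?\*/\s*)*", "", sql, flags=re.DOTALL)
    first = stmt.strip().split(None, 1)[0].lower() if stmt.strip() else ""
    return first in READ_ONLY_PREFIXES

print(is_read_only("SELECT * FROM users"))  # True
print(is_read_only("DROP TABLE users"))     # False
```

A real implementation would also need to reject multi-statement payloads and may instead rely on a database account with read-only privileges.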

Remote MCP Server Authless

A simple way to deploy an authentication-free Model Context Protocol server on Cloudflare Workers that can be connected to AI tools like Claude Desktop or the Cloudflare AI Playground.

X MCP Server

Enables users to interact with X (Twitter) through the X API. Supports posting tweets, retrieving user timelines, searching tweets, and replying to tweets with comprehensive error handling.

Genesis MCP Server

A template for deploying remote MCP servers on Cloudflare Workers without authentication. Provides a foundation for building custom MCP tools that can be accessed from Claude Desktop or the Cloudflare AI Playground.

Display & Video 360 API MCP Server

An MCP server that enables interaction with Google's Display & Video 360 advertising platform API, allowing management of digital advertising campaigns through natural language commands.

database

Database MCP server for MySQL, MariaDB, PostgreSQL & SQLite

Track-It Process Monitor

Enables Claude to monitor and inspect running processes through a lightweight wrapper that captures stdout/stderr logs and stores process metadata in SQLite. Users can track any command execution and query process status, logs, and history through natural language.

Maiga API MCP Server

Provides comprehensive integration with the Maiga API for cryptocurrency analysis, including token technicals, social sentiment tracking, and KOL insights. It enables AI assistants to retrieve market reports, trending token data, and detailed on-chain information.

MCP MySQL Server

Enables interaction with MySQL databases (including AWS RDS and cloud instances) through natural language. Supports database connections, query execution, schema inspection, and comprehensive database management operations.

Meraki Magic MCP

A Python-based MCP server that enables querying Cisco's Meraki Dashboard API to discover, monitor, and manage Meraki environments.

Cursor Rust Tools

An MCP server that lets the LLM in Cursor access Rust Analyzer, crate documentation, and Cargo commands.

MCP Prompt Optimizer

This MCP server provides research-backed prompt optimization tools and professional domain templates designed to improve AI performance through strategies like Tree of Thoughts and Medprompt. It enables users to analyze, auto-optimize, and refine prompts using advanced reasoning patterns and safety-critical alignment techniques.

random-number-server

An MCP server that generates random numbers by using national weather data as entropy seeds. It provides a unique way to generate random values through weather API integration within the Model Context Protocol.
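
The weather-as-entropy idea can be illustrated without any API: hash a few observed readings into a seed for a PRNG. This is a toy sketch of the principle; the field names are invented for illustration, and the server's actual tools and data source may differ:

```python
import hashlib
import random

def seed_from_weather(observations: dict) -> int:
    """Derive a deterministic integer seed from a set of weather readings."""
    # Serialize readings in a stable key order, then hash to an integer.
    blob = "|".join(f"{k}={observations[k]}" for k in sorted(observations))
    return int.from_bytes(hashlib.sha256(blob.encode()).digest()[:8], "big")

# Hypothetical readings, e.g. fetched from a national weather API.
obs = {"temp_c": 18.4, "pressure_hpa": 1013.2, "wind_kph": 11.0}
rng = random.Random(seed_from_weather(obs))
print(rng.randint(1, 100))
```

Note that weather readings change slowly, so a production design would mix in additional entropy rather than seeding from a single observation.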

Amazon Business Integrations MCP Server

Provides AI-enabled access to Amazon Business API documentation, sample code, and troubleshooting resources. Enables developers to search and retrieve API documentation, generate integration code, and get guided solutions for common errors during the API integration process.

PinePaper MCP Server

Enables AI assistants to create and animate graphics in PinePaper Studio using natural language, supporting text, shapes, behavior-driven animations, procedural backgrounds, and SVG export.

Markdown MCP Server

An MCP (Model Context Protocol) server for efficiently managing Markdown documents in Cursor AI IDE, supporting CRUD operations, search, and metadata management.

MCP with Langchain Sample Setup

A sample setup for an MCP (Modular Component Protocol) server and client, designed to be compatible with LangChain. This example focuses on a simple "summarization" task, but you can adapt it to other LangChain functionalities.

**Important Considerations:**

* **MCP (Modular Component Protocol):** MCP isn't a widely standardized protocol. This example uses a simplified, custom implementation based on JSON over HTTP for demonstration purposes. In a real-world scenario, you might consider more robust solutions like gRPC, Thrift, or well-defined REST APIs.
* **LangChain Integration:** The key is to use LangChain components (e.g., LLMs, chains, document loaders) *within* the MCP server to process requests. The client sends data, the server uses LangChain to process it, and the server returns the result.
* **Error Handling:** This is a simplified example. Robust error handling (try-except blocks, logging, proper HTTP status codes) is crucial in a production environment.
* **Security:** This example lacks security measures (authentication, authorization). Implement appropriate security based on your needs.
* **Asynchronous Operations:** For more complex tasks, consider using asynchronous operations (e.g., `asyncio` in Python) to improve performance and prevent blocking.

**1. MCP Server (using Flask):**

```python
from flask import Flask, request, jsonify
from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import CharacterTextSplitter
from langchain.docstore.document import Document
import os

# Set your OpenAI API key (or use environment variables)
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"  # Replace with your actual key

app = Flask(__name__)

@app.route('/summarize', methods=['POST'])
def summarize_text():
    try:
        data = request.get_json()
        text = data.get('text')
        if not text:
            return jsonify({'error': 'Missing "text" parameter'}), 400

        # LangChain components
        llm = OpenAI(temperature=0)  # Adjust temperature as needed
        summarize_chain = load_summarize_chain(llm, chain_type="map_reduce")  # or "stuff", "refine"

        # For large texts, split into chunks
        text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
        texts = text_splitter.split_text(text)

        # Create LangChain documents from the text chunks
        docs = [Document(page_content=t) for t in texts]

        # Run the summarization chain
        summary = summarize_chain.run(docs)
        return jsonify({'summary': summary})
    except Exception as e:
        print(f"Error: {e}")  # Log the error
        return jsonify({'error': str(e)}), 500  # Return error with status code

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)  # Make accessible on network
```

**2. MCP Client (using `requests`):**

```python
import requests
import json

def summarize_with_mcp(text, server_url="http://localhost:5000/summarize"):
    """
    Sends text to the MCP server for summarization.

    Args:
        text: The text to summarize.
        server_url: The URL of the MCP server's summarize endpoint.

    Returns:
        The summary from the server, or None if there was an error.
    """
    try:
        payload = {'text': text}
        headers = {'Content-Type': 'application/json'}
        response = requests.post(server_url, data=json.dumps(payload), headers=headers)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
        data = response.json()
        return data.get('summary')
    except requests.exceptions.RequestException as e:
        print(f"Error connecting to server: {e}")
        return None
    except json.JSONDecodeError as e:
        print(f"Error decoding JSON response: {e}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None

if __name__ == '__main__':
    sample_text = """
    This is a long piece of text that needs to be summarized. It contains
    important information about LangChain and MCP. LangChain is a powerful
    framework for building applications using large language models. MCP, in
    this context, is a simple protocol for communication between a client and
    a server. The server uses LangChain to process requests from the client.
    This example demonstrates a basic summarization task. More complex tasks
    can be implemented using different LangChain components and chains. Error
    handling and security are important considerations for production
    deployments.
    """
    summary = summarize_with_mcp(sample_text)
    if summary:
        print("Summary:", summary)
    else:
        print("Failed to get summary.")
```

**Explanation:**

* **Server (Flask):**
  * Uses Flask to create a simple HTTP server.
  * The `/summarize` endpoint receives POST requests with a JSON payload containing the `text` to summarize.
  * It initializes LangChain components: `OpenAI` (the LLM) and `load_summarize_chain` (the summarization chain). You'll need an OpenAI API key.
  * For larger texts, it splits the text into chunks using `CharacterTextSplitter` and wraps each chunk in a LangChain `Document`.
  * It runs the summarization chain and returns the summary in a JSON response.
  * Includes basic error handling.
* **Client:**
  * Uses the `requests` library to send a POST request to the server's `/summarize` endpoint.
  * It packages the text to be summarized in a JSON payload.
  * It handles potential errors during the request (e.g., connection errors, bad responses).
  * It prints the summary received from the server.

**How to Run:**

1. **Install dependencies:** `pip install flask langchain openai requests tiktoken`
2. **Set your OpenAI API key:** Replace `"YOUR_OPENAI_API_KEY"` in the server code with your actual key. Consider using environment variables for security.
3. **Run the server:** `python mcp_server.py`
4. **Run the client:** `python mcp_client.py`

**Key Adaptations for Different LangChain Tasks:**

* **Different chains:** Instead of `load_summarize_chain`, use other LangChain chains (e.g., `LLMChain`, `ConversationalRetrievalChain`) based on the task you want to perform.
* **Different LLMs:** Use other LLMs besides `OpenAI` (e.g., `HuggingFaceHub`, `Cohere`). You'll need to install the appropriate LangChain integration and configure the LLM.
* **Data loading:** Use different LangChain document loaders (e.g., `WebBaseLoader`, `CSVLoader`, `PDFMinerLoader`) to load data from various sources.
* **Input/output:** Adjust the input and output data formats in the server and client to match the requirements of your task. For example, you might send a question and receive an answer, or send a list of documents and receive a ranked list of relevant documents.
* **Prompt engineering:** Carefully design the prompts used in your LangChain chains to achieve the desired results.

This provides a basic framework. Remember to adapt it to your specific use case and add proper error handling, security, and performance optimizations.
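
The chunking step the server relies on is simple enough to sketch without LangChain. The following is a naive fixed-size splitter, a rough stand-in for what `CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)` accomplishes (LangChain's real implementation splits on separators and merges, so the boundaries differ):

```python
def split_text(text: str, chunk_size: int = 1000, overlap: int = 0) -> list[str]:
    """Naively split text into fixed-size chunks with optional overlap."""
    chunks = []
    step = chunk_size - overlap  # how far the window advances each iteration
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

parts = split_text("a" * 2500, chunk_size=1000)
print([len(p) for p in parts])  # [1000, 1000, 500]
```

With a non-zero overlap, adjacent chunks share context, which helps map-reduce summarization keep continuity across chunk boundaries.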

Hurricane Tracker MCP Server

Provides real-time hurricane tracking, 5-day forecast cones, location-based alerts, and historical storm data from NOAA/NHC through MCP tools for AI assistants.

Spotinst MCP Server

An MCP server for the Spot.io API that enables management of AWS and Azure Ocean clusters across multiple accounts. It provides tools for cluster inventory, node management, cost analysis, and scaling operations through natural language.

Plasmate

Agent-native headless browser for AI agents. Converts web pages to a Semantic Object Model (SOM) instead of raw HTML — 17x average token reduction across real-world sites (up to 117x on complex pages). Native MCP server with fetch_page, extract_text, extract_links, and full browser automation. No API key required.
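
The token-reduction claim rests on discarding markup and keeping only semantics. As a rough illustration of the principle (this is not Plasmate's SOM format, which is not documented here), the stdlib `html.parser` can reduce a page to visible text plus link targets:

```python
from html.parser import HTMLParser

class SemanticExtractor(HTMLParser):
    """Keep only visible text and link targets; drop tags, scripts, styles."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style> elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.parts.append(f"[link: {href}]")

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

html = ('<html><head><style>p{color:red}</style></head>'
        '<body><p>Hello</p><a href="/docs">Docs</a></body></html>')
p = SemanticExtractor()
p.feed(html)
print(" ".join(p.parts))  # Hello [link: /docs] Docs
```

Even this crude pass shrinks the markup dramatically; a semantic object model would additionally preserve structure (headings, forms, interactive elements) in a compact schema.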

SpinQit MCP Tools

Enables AI models to invoke SpinQ's quantum computing hardware resources by creating and submitting quantum circuits (QASM) to the SpinQ cloud platform for execution and results retrieval.