Discover Awesome MCP Servers

Extend your agent with 24,040 capabilities via MCP servers.

Metasploit MCP Server

Bridges large language models with the Metasploit Framework to enable natural language control over penetration testing workflows. It provides tools for searching modules, executing exploits, generating payloads, and managing active sessions.

macOS GUI Control MCP

Enables comprehensive control over macOS GUI elements including mouse, keyboard, window management, and screen capture. It allows for automated system interactions and script execution while maintaining safety by blocking potentially destructive deletion commands.
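
The deletion-blocking safety it describes can be sketched as a simple pattern filter. This is a hypothetical Python illustration of the approach only; the server's actual blocklist, patterns, and function names are not documented here.

```python
import re

# Hypothetical patterns for potentially destructive shell commands.
# A real blocklist would be more extensive; this only illustrates the idea.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf?\b",       # recursive/forced deletion
    r"\bsrm\b",             # secure removal
    r"\bdiskutil\s+erase",  # disk erasure
    r"\bmkfs\b",            # filesystem creation (wipes data)
]

def is_command_allowed(command: str) -> bool:
    """Return False if the command matches any destructive pattern."""
    return not any(re.search(p, command) for p in DESTRUCTIVE_PATTERNS)
```

A filter like this would run before any script-execution tool dispatches the command to the shell.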

Fabric MCP Server

Provides access to Daniel Miessler's Fabric AI prompts (patterns and strategies) through MCP, automatically syncing with the upstream repository to enable powerful prompt templates in AI workflows.

MyWeight MCP Server

A server that connects to the Health Planet API to fetch and provide weight measurement data through any MCP-compatible client, allowing for retrieval and analysis of personal weight records.

Uber External Ads API MCP Server

Enables users to manage Uber advertising campaigns through natural language by providing access to Uber's External Ads API. Supports campaign creation, retrieval, updating, and deletion with comprehensive filtering and configuration options.

Bubble MCP

Enables AI assistants to interact with Bubble.io applications through the Model Context Protocol for data discovery, CRUD operations, and workflow execution. It provides a standardized interface for managing Bubble database records while respecting privacy rules and security configurations.

Finizi B4B MCP Server

Enables AI assistants to interact with the Finizi B4B platform through 15 comprehensive tools for managing business entities, invoices, vendors, and products. Features secure JWT authentication, automatic retries, and comprehensive business data operations through natural language commands.
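
Automatic retries of the kind mentioned are usually implemented with exponential backoff. A minimal Python sketch follows; the decorator name, parameters, and `flaky` example are illustrative assumptions, not Finizi's actual implementation.

```python
import time
import functools

def with_retries(max_attempts=3, base_delay=0.1):
    """Retry a flaky call with exponential backoff (illustrative only)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts: propagate the error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator

# Usage: a call that fails twice, then succeeds on the third attempt
calls = {"n": 0}

@with_retries(max_attempts=3, base_delay=0)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"
```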

XFetch Mcp

Search on steroids. Retrieves content from any web page, even pages protected by Cloudflare and other security systems.

Bocha Search MCP

An AI-focused search engine that lets AI applications access high-quality knowledge from billions of web pages and ecosystem content sources across many domains, including weather, news, encyclopedias, medical information, train tickets, and images.

Accounting MCP Server

Enables personal financial management through AI assistants by providing tools to add transactions, check balances, list transaction history, and generate monthly summaries. Supports natural language interaction for tracking income and expenses with categorization.
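
The operations listed (add transactions, check balances, monthly summaries with categories) can be sketched with a toy ledger. The class and method names below are illustrative, not the server's actual tool names.

```python
from collections import defaultdict
from datetime import date

class Ledger:
    """Toy transaction tracker illustrating the operations described above."""
    def __init__(self):
        self.transactions = []

    def add(self, day: date, amount: float, category: str):
        # Positive amounts are income, negative amounts are expenses.
        self.transactions.append({"date": day, "amount": amount,
                                  "category": category})

    def balance(self) -> float:
        return sum(t["amount"] for t in self.transactions)

    def monthly_summary(self, year: int, month: int) -> dict:
        # Net amount per category for the given month.
        summary = defaultdict(float)
        for t in self.transactions:
            if t["date"].year == year and t["date"].month == month:
                summary[t["category"]] += t["amount"]
        return dict(summary)

ledger = Ledger()
ledger.add(date(2024, 5, 1), 3000.0, "salary")
ledger.add(date(2024, 5, 3), -42.5, "groceries")
ledger.add(date(2024, 6, 1), -10.0, "transport")
```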

Build

Okay, I can help you understand how to create different MCP (Model Context Protocol) servers using the TypeScript SDK. However, I need a little more information to give you the *most* helpful and specific answer. Please tell me:

1. **Which MCP SDK are you using?** There are several possibilities. For example:
   * Are you referring to a specific cloud provider's model-management service (e.g., Google Cloud's Model Registry, AWS SageMaker Model Registry, Azure Machine Learning Model Registry)? If so, please specify which one.
   * Are you using a more general-purpose MCP library or framework? If so, please provide its name.
   * Are you building your own MCP server from scratch?
2. **What do you mean by "different"?** Which aspects of the MCP server do you want to customize or differentiate? For example:
   * **Different data storage:** Do you want to store model configurations in different databases (e.g., PostgreSQL, MongoDB, a file system)?
   * **Different authentication/authorization:** Do you need different ways to authenticate users or control access to model configurations?
   * **Different APIs:** Do you want to expose different endpoints or use a different API style (e.g., REST, gRPC)?
   * **Different model configuration formats:** Do you want to support different formats for defining model configurations (e.g., JSON, YAML, Protobuf)?
   * **Different model deployment targets:** Do you want to manage models deployed to different environments (e.g., cloud, edge, on-premise)?
   * **Different metadata:** Do you want to store different metadata about your models?
3. **What is your current level of understanding?** Are you just starting out, or have you already tried something? Code snippets of what you've tried will help me give more targeted guidance.

**General Concepts and Approaches (Without Specific SDK Information)**

Assuming you're building something from scratch or using a general-purpose library, here are the common elements of such a server and how you might differentiate them:

* **Data Model:** The core of the server is the data model that represents model configurations. This typically includes:
  * Model name/ID and version
  * Model metadata (e.g., description, author, creation date)
  * Model parameters (e.g., learning rate, batch size)
  * Model artifacts (e.g., the trained model file, code dependencies)
  * Deployment information (e.g., target environment, resource requirements)

  You can differentiate servers by data model: one might focus on deep learning models and store details about neural network architectures, while another focuses on traditional machine learning models and stores information about feature engineering pipelines.
* **Storage Layer:** Responsible for persisting model configurations. Common options:
  * **Relational databases (e.g., PostgreSQL, MySQL):** good for structured data and complex queries.
  * **NoSQL databases (e.g., MongoDB, Cassandra):** good for flexible schemas and scalability.
  * **Object storage (e.g., AWS S3, Google Cloud Storage):** good for large model artifacts.
  * **File system:** simple but less scalable.

  For example, one server might use PostgreSQL for metadata and S3 for model artifacts, while another uses MongoDB for everything.
* **API Layer:** The interface clients use to interact with the server. Common options:
  * **REST:** a widely used API style based on HTTP.
  * **gRPC:** a high-performance API style based on Protocol Buffers.
  * **GraphQL:** a query language for APIs.

  One server might expose a REST API for easy integration with web applications, while another exposes gRPC for high-performance communication between microservices.
* **Authentication and Authorization:** These mechanisms control access to the server. Common options:
  * **API keys:** simple but less secure.
  * **OAuth 2.0:** a widely used standard for delegated authorization.
  * **Role-based access control (RBAC):** permissions assigned to roles, roles assigned to users.

  For example, one server might use API keys for internal access and OAuth 2.0 for external access.
* **Deployment:** How the server is deployed and managed:
  * **Cloud-based:** on platforms like AWS, Azure, or GCP.
  * **On-premise:** on your own infrastructure.
  * **Containerized (Docker, Kubernetes):** for portability and scalability.

**Example (Conceptual - No Specific SDK)**

Say you want to create two servers:

* **Server 1:** manages TensorFlow models deployed to Google Cloud. It uses PostgreSQL for metadata and Google Cloud Storage for model artifacts, exposes a REST API, and uses Google Cloud IAM for authentication.
* **Server 2:** manages PyTorch models deployed to edge devices. It uses MongoDB for metadata and a local file system for artifacts, exposes a gRPC API, and uses API keys for authentication.

In this case, you would need to:

1. Define different data models for TensorFlow and PyTorch models.
2. Implement different storage layers using PostgreSQL/GCS and MongoDB/file system.
3. Implement different API layers using REST and gRPC.
4. Implement different authentication mechanisms using Google Cloud IAM and API keys.

**TypeScript Code Snippet (Illustrative - Requires Specific SDK)**

```typescript
// This is a very high-level example and needs to be adapted to your specific SDK.

// Example of defining a data model (simplified)
interface TensorFlowModelConfig {
  name: string;
  version: string;
  architecture: string;
  learningRate: number;
  gcsArtifactPath: string; // Path to the model in Google Cloud Storage
}

interface PyTorchModelConfig {
  name: string;
  version: string;
  modelDefinition: string; // Path to the model definition file
  batchSize: number;
  localArtifactPath: string; // Path to the model on the local file system
}

// Example of a simplified API endpoint (using Express.js)
import express from 'express';

const app = express();
const port = 3000;

app.get('/tensorflow/models/:name', (req, res) => {
  // Logic to retrieve the TensorFlow model config from PostgreSQL and GCS
  const modelName = req.params.name;
  // ... (retrieve from database and storage)
  const modelConfig: TensorFlowModelConfig = {
    name: modelName,
    version: '1.0',
    architecture: 'CNN',
    learningRate: 0.001,
    gcsArtifactPath: 'gs://my-bucket/my-model.pb',
  };
  res.json(modelConfig);
});

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`);
});
```

**Next Steps**

Please provide the information requested above (specifically, the MCP SDK you're using and what you mean by "different"). With that, I can give much more specific guidance, including code examples tailored to your situation.

TypeScript MCP Server Boilerplate

A boilerplate project for quickly developing Model Context Protocol (MCP) servers using TypeScript SDK, with examples of tools (calculator, greeting) and resources (server info).

WorkItems DevOps MCP Server

Enables LLMs and AI applications to interact with Azure DevOps Work Items, supporting queries, filtering, status updates, date management, effort tracking, descriptions, and comments through natural language.

X MCP Server

Enables users to interact with X (Twitter) through the X API. Supports posting tweets, retrieving user timelines, searching tweets, and replying to tweets with comprehensive error handling.

LINE Bot MCP Server

Model Context Protocol server implementation that integrates the LINE Messaging API to connect AI agents with LINE Official Accounts, enabling agents to send messages to users.

Genesis MCP Server

A template for deploying remote MCP servers on Cloudflare Workers without authentication. Provides a foundation for building custom MCP tools that can be accessed from Claude Desktop or the Cloudflare AI Playground.

Display & Video 360 API MCP Server

An MCP server that enables interaction with Google's Display & Video 360 advertising platform API, allowing management of digital advertising campaigns through natural language commands.

Maiga API MCP Server

Provides comprehensive integration with the Maiga API for cryptocurrency analysis, including token technicals, social sentiment tracking, and KOL insights. It enables AI assistants to retrieve market reports, trending token data, and detailed on-chain information.

MCP MySQL Server

Enables interaction with MySQL databases (including AWS RDS and cloud instances) through natural language. Supports database connections, query execution, schema inspection, and comprehensive database management operations.
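
Schema inspection of the kind described can be sketched with Python's stdlib `sqlite3` as a stand-in for a MySQL connection (the actual server speaks MySQL, where `SHOW TABLES` and `DESCRIBE` play these roles; the table names here are made up).

```python
import sqlite3

# In-memory database standing in for a MySQL instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)")

def list_tables(connection):
    """Analogue of MySQL's SHOW TABLES."""
    rows = connection.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()
    return [r[0] for r in rows]

def describe_table(connection, table):
    """PRAGMA table_info is SQLite's analogue of MySQL's DESCRIBE."""
    return [(col[1], col[2])
            for col in connection.execute(f"PRAGMA table_info({table})")]
```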

Html2url

Remote MCP Server

A cloud-based custom MCP server using Azure Functions that enables saving and retrieving code snippets with secure communication through keys, HTTPS, OAuth, and network isolation options.

V2.ai Insights Scraper MCP

A Model Context Protocol server that scrapes blog posts from V2.ai Insights, extracts content, and provides AI-powered summaries using OpenAI's GPT-4.

Claude Agents MCP Server

Centrally manages Claude agent definitions, configurations, and custom commands across multiple devices using a SQLite database, eliminating file synchronization conflicts and enabling live updates across all connected Claude sessions.
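
Storing definitions in a single SQLite database with upserts is what removes file-sync conflicts: the last write from any device wins. A minimal sketch follows; the schema and function names are invented for illustration and are not the server's actual schema.

```python
import sqlite3
import json

# Illustrative schema for centrally stored agent definitions (made-up columns).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE agents (
    name TEXT PRIMARY KEY,
    definition TEXT NOT NULL,   -- JSON blob with the agent's configuration
    updated_at TEXT NOT NULL
)""")

def upsert_agent(name, definition, updated_at):
    # Upsert so an edit from any device overwrites the shared copy.
    db.execute(
        "INSERT INTO agents (name, definition, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(name) DO UPDATE SET definition = excluded.definition, "
        "updated_at = excluded.updated_at",
        (name, json.dumps(definition), updated_at),
    )

def get_agent(name):
    row = db.execute("SELECT definition FROM agents WHERE name = ?",
                     (name,)).fetchone()
    return json.loads(row[0]) if row else None

upsert_agent("reviewer", {"model": "claude"}, "2024-01-01")
upsert_agent("reviewer", {"model": "claude", "tools": []}, "2024-01-02")
```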

MCP Server for MySQL

Provides access to MySQL databases with fine-grained access control, supporting multiple databases simultaneously with configurable access modes (readonly, readwrite, full) and table-level permissions using whitelists, blacklists, wildcards, and regex patterns.
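
A table-level permission check combining whitelists, wildcards, and regex patterns can be sketched as below. The `re:` prefix convention and the precedence rule (blacklist wins) are assumptions for illustration; the server's actual configuration syntax may differ.

```python
import fnmatch
import re

def table_allowed(table: str, whitelist=None, blacklist=None) -> bool:
    """
    Illustrative permission check. Patterns are shell-style wildcards
    (via fnmatch) unless prefixed with 're:', which switches to regex.
    Blacklist entries take precedence over whitelist entries.
    """
    def matches(pattern: str) -> bool:
        if pattern.startswith("re:"):
            return re.fullmatch(pattern[3:], table) is not None
        return fnmatch.fnmatch(table, pattern)  # supports * and ? wildcards

    if blacklist and any(matches(p) for p in blacklist):
        return False
    if whitelist:
        return any(matches(p) for p in whitelist)
    return True  # no whitelist: everything not blacklisted is allowed
```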

Markdown MCP Server

An MCP (Model Context Protocol) server for efficiently managing Markdown documents in Cursor AI IDE, supporting CRUD operations, search, and metadata management.

MCP with Langchain Sample Setup

Okay, here's a sample setup for a minimal MCP (Model Context Protocol) server and client in Python, designed to be compatible with LangChain. This example focuses on the core communication channel (plain sockets and JSON, not the full MCP specification) and doesn't include LangChain-specific logic within the server/client themselves. The idea is that you'd use this channel to send data to and from a LangChain agent or chain running on a separate server.

**Important Considerations:**

* **Simplicity:** This is a basic example. For production, you'd need to add error handling, security (authentication, encryption), more robust message formatting, and potentially asynchronous communication.
* **LangChain Integration:** The LangChain part happens *outside* of this code. You'd use the client to send prompts to a LangChain agent running on the server and receive the agent's responses.
* **Message Format:** JSON is used for simplicity. Other formats like Protocol Buffers offer better performance and schema validation.
* **Threading/Asyncio:** This example uses basic threading. For higher concurrency, consider using `asyncio`.

**Code:**

```python
import socket
import threading
import json


# Server
class MCPServer:
    def __init__(self, host='localhost', port=12345):
        self.host = host
        self.port = port
        self.server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Allow reuse of the address (helpful for quick restarts)
        self.server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.clients = []  # Keep track of connected clients

    def start(self):
        self.server_socket.bind((self.host, self.port))
        self.server_socket.listen(5)  # Listen for up to 5 incoming connections
        print(f"Server listening on {self.host}:{self.port}")
        while True:
            client_socket, addr = self.server_socket.accept()
            print(f"Accepted connection from {addr}")
            self.clients.append(client_socket)
            client_thread = threading.Thread(target=self.handle_client,
                                             args=(client_socket,))
            client_thread.start()

    def handle_client(self, client_socket):
        try:
            while True:
                data = client_socket.recv(4096)  # Receive up to 4096 bytes
                if not data:
                    break  # Client disconnected
                try:
                    message = json.loads(data.decode('utf-8'))
                    print(f"Received message: {message}")
                    # LangChain integration point: pass 'message' to your
                    # LangChain agent/chain and return its response, e.g.:
                    #   response = my_langchain_agent.run(message['prompt'])
                    #   self.send_message(client_socket, {'response': response})
                    # For this example, just echo the message back:
                    self.send_message(client_socket,
                                      {"response": f"Server received: {message}"})
                except json.JSONDecodeError:
                    print("Received invalid JSON data.")
                    self.send_message(client_socket, {"error": "Invalid JSON"})
        except Exception as e:
            print(f"Error handling client: {e}")
        finally:
            print(f"Closing connection with {client_socket.getpeername()}")
            self.clients.remove(client_socket)
            client_socket.close()

    def send_message(self, client_socket, message):
        try:
            message_json = json.dumps(message)
            client_socket.sendall(message_json.encode('utf-8'))
        except Exception as e:
            print(f"Error sending message: {e}")

    def stop(self):
        for client in self.clients:
            client.close()
        self.server_socket.close()
        print("Server stopped.")


# Client
class MCPClient:
    def __init__(self, host='localhost', port=12345):
        self.host = host
        self.port = port
        self.client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def connect(self):
        try:
            self.client_socket.connect((self.host, self.port))
            print(f"Connected to server at {self.host}:{self.port}")
            return True
        except socket.error as e:
            print(f"Connection error: {e}")
            return False

    def send_message(self, message):
        try:
            message_json = json.dumps(message)
            self.client_socket.sendall(message_json.encode('utf-8'))
            data = self.client_socket.recv(4096)
            if data:
                return json.loads(data.decode('utf-8'))
            return None
        except Exception as e:
            print(f"Error sending/receiving message: {e}")
            return None

    def close(self):
        self.client_socket.close()
        print("Connection closed.")


# Example Usage (in separate files or at the end of the same file)
if __name__ == "__main__":
    import time

    # Server example: run in a daemon thread so the main thread can exit
    server = MCPServer()
    server_thread = threading.Thread(target=server.start)
    server_thread.daemon = True
    server_thread.start()

    time.sleep(1)  # Give the server a moment to start

    # Client example
    client = MCPClient()
    if client.connect():
        message = {"prompt": "What is the capital of France?"}
        response = client.send_message(message)
        if response:
            print(f"Received response: {response}")
        else:
            print("No response received.")
        client.close()

    server.stop()  # Stop the server after the client is done.
```

**Explanation:**

1. **`MCPServer` class:**
   * `__init__`: initializes the server socket, host, and port. `setsockopt` allows reusing the address, which is helpful for quick restarts.
   * `start`: binds the socket, listens for connections, and spawns a new thread for each client that connects.
   * `handle_client`: the core of the server. It receives data from the client and decodes it as JSON; this is where you would integrate with your LangChain agent or chain. The example just echoes the message back, and handles JSON decoding errors.
   * `send_message`: encodes a message as JSON and sends it to the client.
   * `stop`: closes all client connections and the server socket.
2. **`MCPClient` class:**
   * `__init__`: initializes the client socket, host, and port.
   * `connect`: connects to the server.
   * `send_message`: encodes a message as JSON, sends it to the server, receives the response, decodes it, and returns it.
   * `close`: closes the client socket.
3. **Example usage:**
   * Starts a server instance in a separate daemon thread (`daemon = True` makes the server thread exit when the main thread exits).
   * Creates a client, connects, sends a prompt, and prints the response.
   * Closes the client connection and stops the server.

**How to Run:**

1. Save the code as a Python file (e.g., `mcp_example.py`).
2. Run it from your terminal: `python mcp_example.py`

You should see output from both the server and the client: the client sends a message, and the server echoes it back.

**LangChain Integration (Conceptual):**

The key part is the `handle_client` method of `MCPServer`. Instead of just echoing the message, you would do something like this:

```python
# Inside the handle_client method:
try:
    message = json.loads(data.decode('utf-8'))
    print(f"Received message: {message}")

    # LangChain integration:
    from langchain.llms import OpenAI  # Or your preferred LLM
    from langchain.chains import LLMChain
    from langchain.prompts import PromptTemplate

    # Example using OpenAI and a simple prompt
    llm = OpenAI(temperature=0.7,
                 openai_api_key="YOUR_OPENAI_API_KEY")  # Use env vars in practice
    prompt_template = PromptTemplate.from_template("{prompt}")
    chain = LLMChain(llm=llm, prompt=prompt_template)

    response = chain.run(message['prompt'])  # Run the LangChain chain
    self.send_message(client_socket, {'response': response})
except json.JSONDecodeError:
    print("Received invalid JSON data.")
    self.send_message(client_socket, {"error": "Invalid JSON"})
```

**Important Notes for LangChain:**

* **Install LangChain:** `pip install langchain openai` (or other necessary packages).
* **API Keys:** Set up API keys for the LLMs you're using (e.g., OpenAI). Don't hardcode them in the code; use environment variables or a configuration file.
* **Error Handling:** Add more robust error handling around the LangChain calls.
* **Prompt Engineering:** The quality of your prompts will greatly affect the results.
* **Asynchronous Communication (Advanced):** For high-volume scenarios, consider `asyncio` for both the server and the client to handle multiple requests concurrently.

**Spanish Translation of Key Concepts:**

* Server: servidor
* Client: cliente
* Message: mensaje
* Prompt: indicación, instrucción
* Response: respuesta
* Socket: zócalo (although "socket" is often used directly in technical contexts)
* Connection: conexión
* Thread: hilo
* JSON: JSON
* LangChain Agent: agente de LangChain
* LangChain Chain: cadena de LangChain
* API Key: clave API

This example should give you a solid foundation for building a server and client that communicate with a LangChain agent. Adapt the code to your specific needs and add the necessary error handling and security measures.

Hurricane Tracker MCP Server

Provides real-time hurricane tracking, 5-day forecast cones, location-based alerts, and historical storm data from NOAA/NHC through MCP tools for AI assistants.

Sequential Questioning MCP Server

A specialized server that enables LLMs to gather specific information through sequential questioning, implementing the MCP standard for seamless integration with LLM clients.
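
The sequential-questioning flow (ask one question at a time until all required fields are filled) can be sketched as a small state machine. The class and field names are illustrative assumptions, not the server's actual API.

```python
class SequentialQuestioner:
    """Minimal sketch of gathering required fields one question at a time."""
    def __init__(self, questions):
        self.questions = questions  # list of (field, prompt) pairs, in order
        self.answers = {}

    def next_question(self):
        # Return the prompt for the first unanswered field, or None when done.
        for field, prompt in self.questions:
            if field not in self.answers:
                return prompt
        return None

    def record(self, answer):
        # Store the answer against the first unanswered field.
        for field, _ in self.questions:
            if field not in self.answers:
                self.answers[field] = answer
                return

q = SequentialQuestioner([("name", "What is your name?"),
                          ("city", "Which city are you in?")])
first = q.next_question()
q.record("Ada")
second = q.next_question()
q.record("London")
done = q.next_question()
```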

Google Search MCP Server

Google Search MCP Server

A Model Context Protocol server that provides web and image search capabilities through Google's Custom Search API, allowing AI assistants like Claude to access current information from the internet.
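
Requests to Google's Custom Search JSON API take the form sketched below: the endpoint takes `key` (API key), `cx` (search engine ID), and `q` (query), with `searchType=image` switching to image search. The `KEY`/`CX` values are placeholders, and no HTTP request is sent here.

```python
from urllib.parse import urlencode

CSE_ENDPOINT = "https://www.googleapis.com/customsearch/v1"

def build_search_url(query, api_key, cx, image=False):
    """Build a Custom Search API request URL (illustrative; no request sent)."""
    params = {"key": api_key, "cx": cx, "q": query}
    if image:
        params["searchType"] = "image"  # image search instead of web search
    return f"{CSE_ENDPOINT}?{urlencode(params)}"

text_url = build_search_url("mcp servers", "KEY", "CX")
image_url = build_search_url("cat", "KEY", "CX", image=True)
```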

Vercel Functions MCP Server Template

A template for deploying MCP servers on Vercel with serverless functions. Includes example tools for rolling dice and fetching weather data to demonstrate basic tool implementation and API integration patterns.