Discover Awesome MCP Servers

Extend your agent with 27,188 capabilities via MCP servers.

Slack MCP Server

Enables interaction with Slack workspaces to manage channels, post messages, add reactions, view message history and threads, and retrieve user profiles through the Model Context Protocol.

Interactive Brokers MCP Server

Enables real-time stock and options market data retrieval from Interactive Brokers through IB Gateway. Provides stock quotes with price and volume information, plus options quotes with bid, ask, and last price data.

hearthstone-decks-mcp

A Hearthstone deck parsing server that decodes deck codes into detailed card lists, images, and mana curve statistics. It provides tools for searching specific cards and retrieving metadata via the Model Context Protocol.

MapleStory MCP Server

Provides structured access to Nexon's MapleStory Open API, allowing users to query character stats, equipment, Union systems, and guild data. It also enables AI assistants to retrieve game rankings, enhancement probabilities, and official game announcements.

AI Helper MCP Server

A server that allows AI agents to consult multiple large language models (like Grok, Gemini, Claude, GPT-4o) through Model Context Protocol for assistance and information.

TypeScript MCP Server Boilerplate

A boilerplate project for quickly developing Model Context Protocol servers using TypeScript SDK, with example tools (calculator, greetings) and resources pre-implemented.

Sefaria Jewish Library

Enables Large Language Models to retrieve Jewish texts and commentaries from the Sefaria library through a standardized interface.

HubSpot MCP Server

Provides comprehensive access to HubSpot's CRM API, enabling management of contacts, companies, deals, engagements, and associations with support for batch operations and advanced search capabilities.

PagePixels Screenshots MCP Server

Enables AI assistants to capture website screenshots, automate browser interactions, and manage recurring screenshot configurations across 150+ global locations. It also supports AI-powered domain research and visual change monitoring for any web page.

StockMCP

Provides real-time stock market data and financial analysis through Yahoo Finance integration. Enables users to get quotes, historical prices, fundamentals, dividends, analyst forecasts, and growth projections for any stock symbol.

GitHub MCP Server

A standalone MCP server for the GitHub API.

Solar2D MCP Server

Enables AI assistants to run, debug, and interact with Solar2D projects by capturing Lua logs, taking screenshots, and simulating touch events. It streamlines game development through automated simulator configuration and real-time visual analysis.

IBM Informix MCP Server by CData

Localizable XStrings MCP Server

Provides tools for working with iOS Localizable.xcstrings files, enabling automated translation workflows and localization management for iOS/macOS projects using Xcode String Catalogs.

Gmail MCP Server

A Gmail MCP server built with Java and Spring Boot.

Mermaid MCP Server

Enables agents to analyze codebases (local or on GitHub) through file discovery and reading, then automatically generate Mermaid diagrams rendered as PNG images via Kroki, giving a visual understanding of project structure and flow.

Adobe Commerce Support MCP Server

Generates professional Adobe Commerce support responses from case findings. Supports both structured and mixed content formats with automatic categorization capabilities.

TiddlyWiki MCP Server

Provides AI assistants with access to TiddlyWiki wikis via HTTP API, supporting tiddler management (create, update, delete, search) and semantic search capabilities using Ollama embeddings for natural language queries.

Weather MCP Server

A JavaScript ES Modules server that provides weather information including alerts and forecasts for US locations using the National Weather Service API.

VibeCheck MCP Server

An AI-powered security audit tool that analyzes codebases for vulnerabilities using real-time data from MITRE CWE and npm audit. It enables deep analysis of authentication, API security, and dependencies to provide structured findings and remediation steps.

aica - AI Code Analyzer

aica (AI Code Analyzer) reviews your code using AI. Works from the CLI and in GitHub Actions.

Vercel MCP Template

A template for deploying MCP servers on Vercel with example tools for rolling dice and fetching weather data. Provides a starting point for building custom MCP servers with TypeScript.

example-mcp-server

An illustrative Python/FastAPI example server demonstrating core serving concepts: an in-memory model registry, request routing by model name and version, per-model rate limiting, API-key authentication, and request logging. The model execution is simulated, making it a starting point rather than a production-ready system.
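As a rough illustration of the ideas such an example server covers — an in-memory model registry, per-model rate limiting, and an API-key check — here is a minimal, framework-free Python sketch; the model names, key, and rate limits are all hypothetical:

```python
import time

# Hypothetical in-memory model registry; names, versions, and limits are illustrative.
MODELS = {
    "claude-v1": {"versions": ["1.0", "1.1"], "rate_limit": 10, "last_request": 0.0},
}

API_KEY = "demo-key"  # stand-in value; a real server would read this from the environment


def generate(api_key: str, model_name: str, version: str, prompt: str) -> str:
    """Validate a request against the registry and return a simulated completion."""
    if api_key != API_KEY:
        raise PermissionError("invalid API key")
    model = MODELS.get(model_name)
    if model is None or version not in model["versions"]:
        raise KeyError("unknown model or version")
    # Simple rate limit: enforce a minimum spacing between requests per model.
    now = time.time()
    if now - model["last_request"] < 60 / model["rate_limit"]:
        raise RuntimeError("rate limit exceeded")
    model["last_request"] = now
    # A real server would invoke model inference here; we echo the prompt instead.
    return f"[{model_name} {version}] echo: {prompt}"
```

A web framework such as FastAPI would wrap this logic in HTTP endpoints, but the registry, key check, and throttling shown here are the core of the example.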

kmux

A terminal emulator MCP server engineered specifically for LLMs, with a block-oriented design that organizes command input/output into recognizable blocks and provides semantic session management. Lets AI agents use terminals efficiently for writing code, installing software, and executing commands without context overflow.

Prompt Registry MCP

A lightweight, file-based server for managing and serving personal prompt templates with variable substitution support via the Model Context Protocol. It allows users to store, update, and organize prompts in a local directory through integrated MCP tools and CLI assistants.

ShowDoc MCP Server

Automatically fetches API documentation from ShowDoc and generates Android code including Entity classes, Repository patterns, and Retrofit interfaces.

PG_MCP_SERVER

Weather Query MCP Server

An MCP server implementation that lets users fetch and display weather information for specific cities, including temperature, humidity, wind speed, and weather descriptions.

Web-LLM MCP Server

A server that enables browser-based local LLM inference using Playwright to automate interactions with @mlc-ai/web-llm, supporting text generation, chat sessions, model switching, and status monitoring.

Zoom API MCP Server

An MCP server that enables interaction with Zoom's API through the Model Context Protocol, allowing users to access and control Zoom's functionality via natural language commands.