Discover Awesome MCP Servers
Extend your agent with 23,601 capabilities via MCP servers.
- All (23,601)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Vercel MCP Template
A template for deploying MCP servers on Vercel with example tools for rolling dice and fetching weather data. Provides a starting point for building custom MCP servers with TypeScript.
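The template's example tools are written in TypeScript; as a rough, language-neutral sketch of what such an example tool looks like in MCP terms, here is a hypothetical dice-rolling tool built with the official Python SDK's FastMCP (names are illustrative, not code from the template):

```python
import random

from mcp.server.fastmcp import FastMCP  # official Python MCP SDK

mcp = FastMCP("dice-demo")  # hypothetical server name

@mcp.tool()
def roll_dice(sides: int = 6, count: int = 1) -> list[int]:
    """Roll `count` dice with `sides` sides and return the results."""
    return [random.randint(1, sides) for _ in range(count)]

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```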
example-mcp-server
```python
import json
import os
from typing import AsyncGenerator, Dict, List, Optional

import httpx
from fastapi import FastAPI, HTTPException, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from pydantic import BaseModel, Field

# --- Configuration ---
API_KEY = os.environ.get("ANTHROPIC_API_KEY")  # Get API key from environment variable
if not API_KEY:
    print("Warning: ANTHROPIC_API_KEY environment variable not set. This server will not function.")

MODEL_NAME = os.environ.get("ANTHROPIC_MODEL_NAME", "claude-3-opus-20240229")  # Default model
MAX_TOKENS = int(os.environ.get("ANTHROPIC_MAX_TOKENS", "2048"))  # Default max tokens
TEMPERATURE = float(os.environ.get("ANTHROPIC_TEMPERATURE", "0.7"))  # Default temperature
TOP_P = float(os.environ.get("ANTHROPIC_TOP_P", "0.9"))  # Default top_p

ANTHROPIC_API_URL = "https://api.anthropic.com/v1/complete"
HEADERS = {
    "Content-Type": "application/json",
    "x-api-key": API_KEY or "",
    "anthropic-version": "2023-06-01",  # Check the Anthropic docs for the latest supported version.
}


# --- Data Models ---
class Message(BaseModel):
    role: str = Field(..., description="The role of the message sender (user or assistant).")
    content: str = Field(..., description="The content of the message.")


class AnthropicRequest(BaseModel):
    messages: List[Message] = Field(..., description="The conversation history.")
    max_tokens: int = Field(MAX_TOKENS, description="The maximum number of tokens to generate.")
    temperature: float = Field(TEMPERATURE, description="The sampling temperature.")
    top_p: float = Field(TOP_P, description="The nucleus sampling probability.")
    model: str = Field(MODEL_NAME, description="The model to use for completion.")
    stream: bool = Field(False, description="Whether to stream the response.")


class AnthropicResponse(BaseModel):
    content: str = Field(..., description="The generated content.")
    model: str = Field(..., description="The model used for completion.")
    stop_reason: str = Field(..., description="Why the model stopped generating.")


class AnthropicStreamResponse(BaseModel):
    content: str = Field(..., description="The generated content for this chunk.")
    model: str = Field(..., description="The model used for completion.")
    stop_reason: Optional[str] = Field(None, description="Present only on the last chunk.")
    is_last: bool = Field(False, description="Whether this is the last chunk in the stream.")


# --- FastAPI App ---
app = FastAPI(title="Anthropic MCP Server", description="A simple server to proxy requests to the Anthropic API.")

# CORS: "*" is convenient for development but must be restricted in production.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


def build_payload(request: AnthropicRequest) -> Dict:
    """Build the API payload, converting messages into Anthropic's legacy prompt format."""
    prompt = ""
    for message in request.messages:
        if message.role == "user":
            prompt += f"\n\nHuman: {message.content}"
        elif message.role == "assistant":
            prompt += f"\n\nAssistant: {message.content}"
    prompt += "\n\nAssistant:"  # Open the assistant turn.
    return {
        "model": request.model,
        "max_tokens_to_sample": request.max_tokens,  # The legacy completions API uses this parameter name.
        "temperature": request.temperature,
        "top_p": request.top_p,
        "prompt": prompt,
        "stream": request.stream,
    }


async def call_anthropic_api(request: AnthropicRequest) -> Dict:
    """Call the Anthropic API and return the parsed JSON response.

    Raises:
        HTTPException: If the API call fails.
    """
    async with httpx.AsyncClient() as client:
        try:
            response = await client.post(
                ANTHROPIC_API_URL,
                headers=HEADERS,
                json=build_payload(request),
                timeout=60,  # Adjust as needed.
            )
            response.raise_for_status()  # Raise for 4xx/5xx responses.
            return response.json()
        except httpx.HTTPStatusError as e:
            # Surface Anthropic's own error message when the body parses as JSON.
            try:
                detail = e.response.json()
            except json.JSONDecodeError:
                detail = str(e)
            raise HTTPException(status_code=e.response.status_code, detail=detail)
        except httpx.RequestError as e:
            raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"Request Error: {e}")


@app.post("/chat/completions", response_model=AnthropicResponse)
async def create_completion(request: AnthropicRequest):
    """Create a (non-streaming) completion using the Anthropic API."""
    response_data = await call_anthropic_api(request)
    return AnthropicResponse(
        content=response_data["completion"],
        model=request.model,
        stop_reason=response_data["stop_reason"],
    )


@app.post("/chat/completions_stream")
async def create_completion_stream(request: AnthropicRequest):
    """Create a streaming completion, relayed to the client as server-sent events (SSE)."""
    if not request.stream:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Stream must be set to true for this endpoint.")

    async def event_generator() -> AsyncGenerator[bytes, None]:
        async with httpx.AsyncClient() as client:
            async with client.stream(
                "POST", ANTHROPIC_API_URL, headers=HEADERS, json=build_payload(request), timeout=60
            ) as response:
                response.raise_for_status()
                # Anthropic sends SSE-formatted data; aiter_lines() handles chunk boundaries for us.
                async for line in response.aiter_lines():
                    line = line.strip()
                    if not line.startswith("data:"):
                        continue
                    data_str = line[5:].strip()
                    if data_str == "[DONE]":  # End-of-stream sentinel.
                        final = AnthropicStreamResponse(content="", model=request.model, stop_reason="end_of_stream", is_last=True)
                        yield f"data: {final.json()}\n\n".encode("utf-8")
                        return
                    try:
                        event_data = json.loads(data_str)
                    except json.JSONDecodeError:
                        print(f"Error decoding JSON: {data_str}")
                        continue  # Skip malformed events rather than killing the stream.
                    stop_reason = event_data.get("stop_reason")  # Present only on the last event.
                    out = AnthropicStreamResponse(
                        content=event_data.get("completion", ""),
                        model=request.model,
                        stop_reason=stop_reason,
                        is_last=stop_reason is not None,
                    )
                    yield f"data: {out.json()}\n\n".encode("utf-8")

    return StreamingResponse(event_generator(), media_type="text/event-stream")


# --- Startup ---
@app.on_event("startup")
async def startup_event():
    print("Anthropic MCP Server started")
    if not API_KEY:
        print("Warning: ANTHROPIC_API_KEY environment variable not set. This server will not function.")


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```

Key points:

* **Configuration via environment variables:** `ANTHROPIC_API_KEY` is required (a warning is printed at import and at startup if it is missing), and `ANTHROPIC_MODEL_NAME`, `ANTHROPIC_MAX_TOKENS`, `ANTHROPIC_TEMPERATURE`, and `ANTHROPIC_TOP_P` have sensible defaults, so nothing sensitive is hardcoded.
* **Pydantic data models:** `Message`, `AnthropicRequest`, `AnthropicResponse`, and `AnthropicStreamResponse` provide validation and serialization, and their `Field` descriptions feed the generated OpenAPI docs; no manual `jsonable_encoder` calls are needed.
* **Error handling:** `httpx.HTTPStatusError` (4xx/5xx from Anthropic, surfacing Anthropic's own error body when it parses as JSON) and `httpx.RequestError` (network failures, timeouts) are translated into FastAPI `HTTPException`s with appropriate status codes.
* **Asynchronous I/O:** all outbound calls use `httpx.AsyncClient` with `async`/`await`, and a request timeout keeps the server from hanging indefinitely.
* **Streaming (SSE):** `/chat/completions_stream` relays Anthropic's server-sent events via a `StreamingResponse`, handles the `[DONE]` sentinel, skips malformed JSON events instead of terminating the stream, and marks the final chunk via `is_last` and `stop_reason` (present only on the last event).
* **Prompt construction and versioning:** messages are converted into the legacy `\n\nHuman:` / `\n\nAssistant:` prompt format, ending with an open `\n\nAssistant:` turn, and the required `anthropic-version` header is set to `2023-06-01` (check the Anthropic documentation for the latest supported version).

Install the dependencies with `pip install fastapi uvicorn httpx pydantic`, and set the `ANTHROPIC_API_KEY` environment variable before starting the server.
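For a quick smoke test, a minimal client sketch (assuming the server above is running locally on port 8000):

```python
import httpx

# Call the non-streaming proxy endpoint defined above.
payload = {
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64,
}
resp = httpx.post("http://localhost:8000/chat/completions", json=payload, timeout=90)
resp.raise_for_status()
print(resp.json()["content"])
```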
kmux
A terminal emulator MCP server engineered specifically for LLMs, with a block-oriented design that organizes command input/output into recognizable blocks and semantic session management. Enables AI to use terminals efficiently for writing code, installing software, and executing commands without context overflow.
Prompt Registry MCP
A lightweight, file-based server for managing and serving personal prompt templates with variable substitution support via the Model Context Protocol. It allows users to store, update, and organize prompts in a local directory through integrated MCP tools and CLI assistants.
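The listing does not specify the substitution syntax; a minimal sketch of how file-based templates with `{{variable}}` placeholders (a common convention, assumed here) are typically rendered, with all names hypothetical:

```python
import re
from pathlib import Path

def render_prompt(template_dir: str, name: str, variables: dict[str, str]) -> str:
    """Load `<name>.md` from a local template directory and substitute {{var}} placeholders."""
    text = (Path(template_dir) / f"{name}.md").read_text(encoding="utf-8")
    # Replace each {{key}} with its value; leave unknown placeholders intact.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables.get(m.group(1), m.group(0)), text)

# Example: render_prompt("prompts", "code-review", {"language": "Python"})
```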
Synology MCP Server
Enables AI assistants to manage Synology NAS devices with file operations (create, delete, move, search) and Download Station control through secure authentication and session management.
Supabase MCP Server
Enables AI assistants to interact with Supabase databases through standardized CRUD operations including querying, inserting, updating, and deleting records with support for filtering, pagination, and column selection.
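A sketch of the kind of filtered, paginated query such a server performs under the hood, written with the supabase-py client (the table and column names are hypothetical, and the server's actual implementation may differ):

```python
import os

from supabase import create_client  # supabase-py

client = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# Filtered, paginated read: rows 0-9 of open todos, selected columns only.
rows = (
    client.table("todos")            # hypothetical table
    .select("id,title,created_at")   # column selection
    .eq("done", False)               # filter
    .range(0, 9)                     # pagination
    .execute()
)
print(rows.data)
```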
Crypto Indicators MCP Server
Provides real-time cryptocurrency market data and technical indicators (EMA, MACD, RSI, ATR, Bollinger Bands) from Aster DEX with multi-timeframe analysis support for trading pairs like BTC, ETH, and SOL.
kwrds.ai MCP Server
Model Context Protocol server that enables AI assistants to perform keyword research, SEO analysis, and content planning through natural language queries against kwrds.ai's SEO tools.
mcp-pymilvus-code-generator
A pymilvus code-generation helper with SSE-based MCP support.
Execution System MCP Server
Enables AI-native task and project management through natural language conversation with Claude, eliminating app-switching by letting you add actions, manage projects across multiple areas of focus, and maintain a GTD-style execution system directly in chat.
GA4 MCP Server
Enables Claude to retrieve and analyze Google Analytics 4 data through 20+ analysis tools, providing insights on traffic, user behavior, conversions, and engagement metrics in Japanese-formatted output.
Remote MCP Server Authless
A deployable remote MCP server on Cloudflare Workers that enables the use of custom tools without requiring authentication, allowing integration with AI platforms like Claude Desktop and Cloudflare AI Playground.
ESP-IDF FastMCP Server
Enables ESP-IDF firmware development through MCP by providing tools to manage ESP32 targets, serial ports, and project compilation/building operations.
MCP CLI Command Server
Provides safe, controlled access to network and system diagnostic tools including ping, nmap, dig, traceroute, curl, and whois for troubleshooting connectivity, scanning ports, and performing DNS lookups through whitelisted commands with security validation.
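The whitelisting approach described above can be sketched as follows; this is an illustrative outline under assumed names, not the server's actual validation logic:

```python
import shlex
import subprocess

ALLOWED_COMMANDS = {"ping", "nmap", "dig", "traceroute", "curl", "whois"}

def run_diagnostic(command_line: str, timeout: int = 30) -> str:
    """Run a diagnostic command only if its executable is on the whitelist."""
    args = shlex.split(command_line)
    if not args or args[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"Command not allowed: {args[0] if args else '<empty>'}")
    # shell=False plus shlex parsing avoids shell injection through arguments.
    result = subprocess.run(args, capture_output=True, text=True, timeout=timeout)
    return result.stdout or result.stderr
```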
RescueTime MCP Server
Enables comprehensive access to RescueTime productivity data and features through the Model Context Protocol. Supports analytics retrieval, focus session management, highlights creation, and offline time tracking through natural language interactions.
VPS Initialize
Enables automated VPS initialization and management through SSH connections. Supports installing common services like Node.js, Nginx, and Redis, configuring domains with SSL certificates, and setting up GitHub CI/CD pipelines with deploy keys.
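A minimal sketch of the SSH-driven install step such a server automates, using paramiko (the host address is a placeholder, and authentication is assumed to come from the local SSH agent or default keys):

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("203.0.113.10", username="root")  # placeholder host; uses agent/default keys

# Idempotent install step: only install nginx if it is not already present.
stdin, stdout, stderr = client.exec_command("command -v nginx || apt-get install -y nginx")
print(stdout.read().decode())
client.close()
```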
PromStack MCP Server
Enables Claude Desktop and other MCP-compatible tools to directly access and execute prompts from PromStack, including prompt discovery, selection recommendations, and exporting prompts as Claude Skills.
台灣中央氣象局 MCP 伺服器
MCP server for the Taiwan Central Weather Administration (CWA) API.
YouTube MCP Server Enhanced
Enables comprehensive YouTube data extraction and analysis using yt-dlp, including video metadata, channel statistics, playlists, comments, transcripts, search, trending videos, and engagement analytics with intelligent caching and batch processing.
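The metadata-extraction side of this rests on yt-dlp's Python API; a minimal sketch of pulling video metadata without downloading media:

```python
from yt_dlp import YoutubeDL

# download=False / skip_download fetches metadata only, not the media itself.
with YoutubeDL({"quiet": True, "skip_download": True}) as ydl:
    info = ydl.extract_info("https://www.youtube.com/watch?v=dQw4w9WgXcQ", download=False)

print(info["title"], info.get("view_count"), info.get("channel"))
```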
Web-LLM MCP Server
A server that enables browser-based local LLM inference using Playwright to automate interactions with @mlc-ai/web-llm, supporting text generation, chat sessions, model switching, and status monitoring.
Zoom API MCP Server
An MCP server that enables interaction with Zoom's API through the Model Context Protocol, allowing users to access and control Zoom's functionality via natural language commands.
kospell
Korean-language MCP server (character counting, spell checking, romanization).
ShowDoc MCP Server
Automatically fetches API documentation from ShowDoc and generates Android code including Entity classes, Repository patterns, and Retrofit interfaces.
PG_MCP_SERVER
Weather Query MCP Server
An MCP server implementation that lets users fetch and display weather information for specific cities, including temperature, humidity, wind speed, and weather descriptions.
MCP Server Boilerplate
A starter template for building custom MCP servers that can integrate with Claude Desktop, Cursor, and other AI assistants. Provides example tools, TypeScript support, and automated publishing workflows to help developers quickly create their own MCP integrations.
MCP System Monitor Server
Enables real-time monitoring of system resources including CPU, GPU (NVIDIA, Apple Silicon, AMD/Intel), memory, disk, network, and processes across Windows, macOS, and Linux platforms through natural language queries.
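Most of these metrics map onto psutil's cross-platform API; a point-in-time snapshot sketch (GPU metrics are outside psutil and require vendor-specific tooling):

```python
import psutil

print("CPU %:", psutil.cpu_percent(interval=1))
mem = psutil.virtual_memory()
print("Memory:", f"{mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB")
disk = psutil.disk_usage("/")
print("Disk /:", f"{disk.percent}% used")
net = psutil.net_io_counters()
print("Network:", net.bytes_sent, "bytes sent /", net.bytes_recv, "bytes received")
```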
o3-search MCP
An MCP server that enables web search capabilities using OpenAI's o3 model, allowing AI assistants to perform text-based web searches and return AI-powered results.
Mobile MCP
A Model Context Protocol (MCP) server that provides mobile automation capabilities.
Surfline MCP Server
Enables access to comprehensive surf forecasts from Surfline including current conditions, swell analysis, forecaster insights, tides, and timing information for Santa Cruz surf spots. Provides detailed 8-hour forecasts with expert observations through secure Google OAuth authentication.