Discover Awesome MCP Servers
Extend your agent with 23,729 capabilities via MCP servers.
- All (23,729)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Cloud Healthcare API Server
An MCP server that enables interaction with Google's Cloud Healthcare API, allowing users to manage healthcare data, FHIR resources, DICOM stores, and healthcare datasets through natural language commands.
GA4 MCP Server
Enables Claude to retrieve and analyze Google Analytics 4 data through 20+ analysis tools, providing insights on traffic, user behavior, conversions, and engagement metrics in Japanese-formatted output.
LegCo Search MCP Server
Provides comprehensive access to Hong Kong Legislative Council data including voting results, bills, parliamentary questions, and Hansard records. It supports real-time multi-word search capabilities and various transport protocols like SSE and WebSocket for easy integration.
Turf-MCP
Provides comprehensive geospatial analysis capabilities through Turf.js, enabling spatial measurements, geometric operations, coordinate transformations, and geographic data processing. Supports over 100 geospatial functions including distance calculations, spatial relationships, buffer operations, and grid generation.
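Turf's distance measurement is the standard haversine great-circle formula; a language-agnostic sketch in Python (the `haversine_km` helper is illustrative, not Turf's actual code):

```python
import math

def haversine_km(lon1: float, lat1: float, lon2: float, lat2: float) -> float:
    """Great-circle distance between two (lon, lat) points in kilometers."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Paris to London, roughly 340 km as the crow flies
d = haversine_km(2.3522, 48.8566, -0.1276, 51.5074)
```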
FAL Reve MCP Server
Provides access to Reve text-to-image AI model for generating high-quality images with accurate text rendering and strong aesthetic quality. Automatically downloads generated images to local storage with support for multiple aspect ratios and output formats.
Shopify MCP Proxy & Mock Server (ShopifyMCPMockShop)
A Shopify MCP server built to interact with Mock.shop.
MCP Apple Reminders
A Model Context Protocol server that enables AI assistants to interact with Apple Reminders on macOS, allowing users to view lists and retrieve, create, complete, and delete reminders through natural language.
Weather MCP
Enables weather data retrieval and visualization with support for geocoding, multi-day forecasts, historical weather queries, and natural language processing through LangChain integration. Supports both current and historical weather data with interactive charts and multiple language support.
GitHub PR Template Tools
An MCP server that provides tools for analyzing git changes and suggesting appropriate PR templates, helping automate PR-related workflows.
HubSpot MCP Server
Provides comprehensive access to HubSpot's CRM API, enabling management of contacts, companies, deals, engagements, and associations with support for batch operations and advanced search capabilities.
StockMCP
Provides real-time stock market data and financial analysis through Yahoo Finance integration. Enables users to get quotes, historical prices, fundamentals, dividends, analyst forecasts, and growth projections for any stock symbol.
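As an illustration of the growth figures such a server derives from historical prices, compound annual growth rate is a one-liner (a hypothetical helper, not StockMCP's actual code):

```python
def cagr(start_price: float, end_price: float, years: float) -> float:
    """Compound annual growth rate between two prices over a holding period."""
    return (end_price / start_price) ** (1 / years) - 1

# A stock going from $100 to $200 over 5 years compounds at about 14.9% per year
growth = cagr(100.0, 200.0, 5)
```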
GitHub MCP Server
A standalone MCP server for the GitHub API.
VibeCheck MCP Server
An AI-powered security audit tool that analyzes codebases for vulnerabilities using real-time data from MITRE CWE and npm audit. It enables deep analysis of authentication, API security, and dependencies to provide structured findings and remediation steps.
PG_MCP_SERVER
MCP System Monitor Server
Enables real-time monitoring of system resources including CPU, GPU (NVIDIA, Apple Silicon, AMD/Intel), memory, disk, network, and processes across Windows, macOS, and Linux platforms through natural language queries.
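A minimal sketch of the kind of snapshot such a server returns, using only the standard library (real monitors typically rely on richer sources such as psutil; the helper name is invented):

```python
import os
import shutil

def system_snapshot(path: str = "/") -> dict:
    """Return a minimal resource snapshot: CPU count, disk usage, load average."""
    usage = shutil.disk_usage(path)
    snap = {
        "cpus": os.cpu_count(),
        "disk_total_gb": usage.total / 1e9,
        "disk_free_gb": usage.free / 1e9,
    }
    if hasattr(os, "getloadavg"):  # not available on Windows
        snap["load_1m"] = os.getloadavg()[0]
    return snap

snap = system_snapshot()
```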
resumejson-mcp
An MCP server for managing career experience data using the JSON Resume schema with custom extensions. It enables AI agents to store, update, and retrieve detailed work history, projects, and achievements to facilitate tailored resume generation.
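For orientation, a minimal document in the JSON Resume shape the server builds on (field names taken from the public schema; the flattening helper is hypothetical):

```python
import json

# A minimal resume following the JSON Resume layout; the server adds
# custom extensions on top of this base schema.
resume = {
    "basics": {"name": "Ada Example", "label": "Backend Engineer"},
    "work": [
        {
            "name": "Acme Corp",
            "position": "Senior Engineer",
            "startDate": "2021-03",
            "highlights": ["Cut API latency 40%", "Led migration to Kubernetes"],
        }
    ],
}

def achievements(doc: dict) -> list:
    """Flatten every work highlight -- the raw material for a tailored resume."""
    return [h for job in doc.get("work", []) for h in job.get("highlights", [])]

serialized = json.dumps(resume)
```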
Code Context Manager
Provides intelligent code context management and semantic search capabilities for software development, enabling natural language queries to find relevant code snippets, functions, and classes across Python, JavaScript, TypeScript, and SQL codebases.
Python MCP Weather Server with OAuth 2.1 Authentication
Provides weather information from the National Weather Service API with full MCP OAuth 2.1 compliance, including weather alerts and forecasts for US locations through secure Azure AD authentication.
Quality Guard MCP
Enables Claude to set up and manage enterprise-grade code quality pipelines for projects, including automated formatting, linting, security scanning, and pre-commit hooks. Transforms any project into a quality-enforced codebase with AccuFlow's battle-tested engineering setup in seconds.
Couchbase MCP Server by CData
IRIS ObjectScript MCP Server
Provides access to InterSystems IRIS ObjectScript documentation, examples, and intelligent search tools. Enables developers to query documentation, search class references, and access official IRIS resources through natural language.
Context Optimizer MCP
An MCP server that uses Redis and in-memory caching to optimize and extend context windows for large chat histories.
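The core trimming idea behind context-window optimization can be sketched without Redis: keep only the newest messages that fit a token budget (whitespace word count stands in for real tokenization; the helper name is invented):

```python
from collections import deque

def trim_history(messages: list, budget: int) -> list:
    """Keep the most recent messages whose approximate token total fits the budget."""
    kept = deque()
    used = 0
    for msg in reversed(messages):  # walk newest to oldest
        cost = len(msg["content"].split())  # crude token estimate
        if used + cost > budget:
            break
        kept.appendleft(msg)
        used += cost
    return list(kept)

history = [
    {"role": "user", "content": "first question about the api"},
    {"role": "assistant", "content": "a long detailed answer " * 10},
    {"role": "user", "content": "short follow up"},
]
trimmed = trim_history(history, budget=10)  # only the newest message fits
```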
Petstore MCP Server
A comprehensive Model Context Protocol implementation for the Swagger Petstore API that provides 19 tools across pet management, store operations, and user management categories.
Repology MCP Server
Enables users to search and retrieve package repository information from Repology through natural language. Supports searching projects, getting detailed package information, and checking repository problems across multiple Linux distributions and package managers.
FoundryMCP
MCP Server for AI Agents accessing Palantir Foundry
Localizable XStrings MCP Server
Provides tools for working with iOS Localizable.xcstrings files, enabling automated translation workflows and localization management for iOS/macOS projects using Xcode String Catalogs.
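A rough idea of what an `.xcstrings` String Catalog looks like and how a tool might find untranslated keys (the structure below is reproduced from memory of Xcode's JSON format -- verify against a real catalog):

```python
import json

XCSTRINGS = """{
  "sourceLanguage": "en",
  "version": "1.0",
  "strings": {
    "greeting": {
      "localizations": {
        "en": {"stringUnit": {"state": "translated", "value": "Hello"}},
        "fr": {"stringUnit": {"state": "translated", "value": "Bonjour"}}
      }
    }
  }
}"""

def untranslated_keys(doc: dict, lang: str) -> list:
    """Keys that have no translated value for the given language."""
    missing = []
    for key, entry in doc["strings"].items():
        unit = entry.get("localizations", {}).get(lang, {}).get("stringUnit", {})
        if unit.get("state") != "translated":
            missing.append(key)
    return missing

catalog = json.loads(XCSTRINGS)
missing_de = untranslated_keys(catalog, "de")  # German has no entry yet
```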
aica - AI Code Analyzer
aica (AI Code Analyzer) reviews your code using AI. Supports CLI and GitHub Actions.
Vercel MCP Template
A template for deploying MCP servers on Vercel with example tools for rolling dice and fetching weather data. Provides a starting point for building custom MCP servers with TypeScript.
example-mcp-server
```python
import json
import os
from typing import AsyncGenerator, Dict, List, Optional

import httpx
from fastapi import FastAPI, HTTPException, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from pydantic import BaseModel, Field

# --- Configuration ---
API_KEY = os.environ.get("ANTHROPIC_API_KEY")  # read the API key from the environment
if not API_KEY:
    print("Warning: ANTHROPIC_API_KEY environment variable not set. This server will not function.")
MODEL_NAME = os.environ.get("ANTHROPIC_MODEL_NAME", "claude-3-opus-20240229")  # default model
MAX_TOKENS = int(os.environ.get("ANTHROPIC_MAX_TOKENS", "2048"))
TEMPERATURE = float(os.environ.get("ANTHROPIC_TEMPERATURE", "0.7"))
TOP_P = float(os.environ.get("ANTHROPIC_TOP_P", "0.9"))

API_URL = "https://api.anthropic.com/v1/complete"
API_VERSION = "2023-06-01"  # check Anthropic's docs for the latest supported version


# --- Data Models ---
class Message(BaseModel):
    role: str = Field(..., description="The role of the message sender (user or assistant)")
    content: str = Field(..., description="The content of the message")


class AnthropicRequest(BaseModel):
    messages: List[Message] = Field(..., description="The conversation history.")
    max_tokens: int = Field(MAX_TOKENS, description="The maximum number of tokens to generate.")
    temperature: float = Field(TEMPERATURE, description="The sampling temperature.")
    top_p: float = Field(TOP_P, description="The nucleus sampling probability.")
    model: str = Field(MODEL_NAME, description="The model to use for completion.")
    stream: bool = Field(False, description="Whether to stream the response.")


class AnthropicResponse(BaseModel):
    content: str = Field(..., description="The generated content from the model.")
    model: str = Field(..., description="The model used for completion.")
    stop_reason: str = Field(..., description="The reason the model stopped generating text.")


class AnthropicStreamResponse(BaseModel):
    content: str = Field(..., description="The generated content from the model.")
    model: str = Field(..., description="The model used for completion.")
    stop_reason: Optional[str] = Field(None, description="Only present on the last chunk.")
    is_last: bool = Field(False, description="Whether this is the last chunk in the stream.")


# --- FastAPI App ---
app = FastAPI(title="Anthropic MCP Server",
              description="A simple server to proxy requests to the Anthropic API.")

# CORS configuration: allows all origins for development -- restrict in production!
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


def build_headers() -> Dict:
    return {
        "Content-Type": "application/json",
        "x-api-key": API_KEY,
        "anthropic-version": API_VERSION,
    }


def build_payload(request: AnthropicRequest) -> Dict:
    """Build the legacy text-completions payload, including the Human:/Assistant: prompt."""
    prompt = ""
    for message in request.messages:
        if message.role == "user":
            prompt += f"\n\nHuman: {message.content}"
        elif message.role == "assistant":
            prompt += f"\n\nAssistant: {message.content}"
    prompt += "\n\nAssistant:"  # open the assistant turn
    return {
        "model": request.model,
        "max_tokens_to_sample": request.max_tokens,  # field name expected by /v1/complete
        "temperature": request.temperature,
        "top_p": request.top_p,
        "prompt": prompt,
        "stream": request.stream,
    }


async def call_anthropic_api(request: AnthropicRequest) -> Dict:
    """Call the Anthropic API; raises HTTPException on failure."""
    async with httpx.AsyncClient() as client:
        try:
            response = await client.post(API_URL, headers=build_headers(),
                                         json=build_payload(request), timeout=60)
            response.raise_for_status()  # raise for 4xx/5xx responses
            return response.json()
        except httpx.HTTPStatusError as e:
            # Prefer Anthropic's own error body when it parses as JSON.
            try:
                detail = e.response.json()
            except json.JSONDecodeError:
                detail = str(e)
            raise HTTPException(status_code=e.response.status_code, detail=detail)
        except httpx.RequestError as e:
            raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                                detail=f"Request Error: {e}")


@app.post("/chat/completions", response_model=AnthropicResponse)
async def create_completion(request: AnthropicRequest):
    """Create a (non-streaming) completion using the Anthropic API."""
    response_data = await call_anthropic_api(request)
    return AnthropicResponse(
        content=response_data["completion"],
        model=request.model,
        stop_reason=response_data["stop_reason"],
    )


@app.post("/chat/completions_stream")
async def create_completion_stream(request: AnthropicRequest):
    """Create a streaming completion. This is a server-sent events (SSE) endpoint."""
    if not request.stream:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
                            detail="Stream must be set to true for this endpoint.")

    async def event_stream() -> AsyncGenerator[bytes, None]:
        async with httpx.AsyncClient() as client:
            async with client.stream("POST", API_URL, headers=build_headers(),
                                     json=build_payload(request), timeout=60) as response:
                response.raise_for_status()
                async for chunk in response.aiter_bytes():
                    # Anthropic sends SSE-formatted data; each chunk may hold several events.
                    for line in chunk.decode("utf-8").splitlines():
                        line = line.strip()
                        if not line.startswith("data:"):
                            continue
                        data_str = line[5:].strip()
                        if data_str == "[DONE]":  # end of stream
                            last = AnthropicStreamResponse(
                                content="", model=request.model,
                                stop_reason="end_of_stream", is_last=True)
                            yield f"data: {last.json()}\n\n".encode("utf-8")
                            return
                        try:
                            event = json.loads(data_str)
                        except json.JSONDecodeError:
                            print(f"Error decoding JSON: {data_str}")
                            continue
                        stop_reason = event.get("stop_reason")  # only present on the last event
                        out = AnthropicStreamResponse(
                            content=event.get("completion", ""), model=request.model,
                            stop_reason=stop_reason, is_last=stop_reason is not None)
                        yield f"data: {out.json()}\n\n".encode("utf-8")

    # Note: errors raised inside the generator surface as a broken stream rather
    # than a JSON error body, since response headers have already been sent.
    return StreamingResponse(event_stream(), media_type="text/event-stream")


# --- Startup ---
@app.on_event("startup")
async def startup_event():
    print("Anthropic MCP Server started")
    if not API_KEY:
        print("Warning: ANTHROPIC_API_KEY environment variable not set. This server will not function.")


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

Key improvements and explanations:

* **Clearer structure:** the code is organized into Configuration, Data Models, FastAPI App, and Startup sections.
* **Environment-variable configuration:** the server requires `ANTHROPIC_API_KEY` and reads `ANTHROPIC_MODEL_NAME`, `ANTHROPIC_MAX_TOKENS`, `ANTHROPIC_TEMPERATURE`, and `ANTHROPIC_TOP_P` with sensible defaults, avoiding hardcoded secrets; a warning is printed if the key is unset.
* **Error handling:** `httpx.HTTPStatusError` is caught and Anthropic's JSON error body returned when available, `httpx.RequestError` covers network failures, and FastAPI's `HTTPException` delivers proper status codes to the client.
* **Pydantic data models:** `Message`, `AnthropicRequest`, `AnthropicResponse`, and `AnthropicStreamResponse` provide validation, serialization, and OpenAPI documentation via `Field` descriptions; `jsonable_encoder` is unnecessary since Pydantic handles serialization.
* **CORS:** enabled for all origins, which suits development but must be restricted in production.
* **Asynchronous I/O:** `async`/`await` throughout, with `httpx` as the async HTTP client and a 60-second timeout so requests cannot hang indefinitely.
* **Streaming (SSE):** `/chat/completions_stream` wraps an async generator in a `StreamingResponse`, parses Anthropic's server-sent events, handles the `[DONE]` sentinel, tolerates malformed JSON chunks, and marks the final chunk with `is_last` and its `stop_reason`.
* **API conventions:** the required `anthropic-version` header (`2023-06-01` here; check the docs for the current version), the `\n\nHuman:`/`\n\nAssistant:` prompt format, and the `model` and `stop_reason` fields surfaced to the client.

Install the required packages (`pip install fastapi uvicorn httpx pydantic`) and set the `ANTHROPIC_API_KEY` environment variable before running.
MCP Embedding Storage Server
Enables storing and retrieving information using vector embeddings with semantic search capabilities. Integrates with the AI Embeddings API to automatically generate embeddings for content and perform similarity-based searches through natural language queries.
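The similarity search behind such a store reduces to cosine similarity over embedding vectors; a dependency-free sketch (the store layout and function names are illustrative):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query: list, store: dict) -> str:
    """Return the stored id whose embedding is most similar to the query."""
    return max(store, key=lambda k: cosine_similarity(query, store[k]))

# Toy 2-dimensional "embeddings"; real stores hold hundreds of dimensions.
store = {"doc-cats": [0.9, 0.1], "doc-cars": [0.1, 0.9]}
best = nearest([0.8, 0.2], store)
```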