Discover Awesome MCP Servers
Extend your agent with 28,665 capabilities via MCP servers.
- All (28,665)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Zoho Books MCP Server by CData
This project builds a read-only MCP server. For full read, write, update, delete, and action capabilities and a simplified setup, check out our free CData MCP Server for Zoho Books (beta): https://www.cdata.com/download/download.aspx?sku=GZZK-V&type=beta
Guess Number MCP Server
Enables users to play a number guessing game through a remote MCP server deployed on Google Cloud Run. The server tracks game state per user and maintains conversation logs for an interactive guessing experience.
Gmail API MCP Server
This MCP server provides a natural language interface to the Gmail API, allowing users to interact with Gmail functionality (like emails, threads, labels, etc.) through the Model Context Protocol.
Myself MCP Server
Provides a comprehensive interface to a personal knowledge base, including tools to query professional skills, project history, and business strategy. It enables natural language search across personal profile data, resumes, learning roadmaps, and goal metrics.
Kaggle MCP Server
MCP server for interacting with Kaggle competitions.
Sequential Thinking MCP Server (Python Implementation)
A Python implementation of the Sequential Thinking MCP server, built with the official Model Context Protocol (MCP) Python SDK. This server facilitates a detailed, step-by-step thinking process for problem solving and analysis.
MCP Web Worker Demo
MCP demonstration running the client and server as web workers, driven from a web interface.
FastAPI-MCP
Exposes FastAPI endpoints as Model Context Protocol (MCP) tools while preserving existing authentication, schemas, and documentation. It enables seamless integration of FastAPI services into MCP ecosystems using a native ASGI transport layer.
Claude Relay
Enables real-time communication between Claude Code instances across multiple machines via WebSocket, allowing context sharing, task handoffs, and coordination between sessions.
Ghost CMS MCP Server
Enables comprehensive management of Ghost CMS instances through the Admin API, supporting content operations (posts, tags), member management, newsletters, tiers, offers, and webhooks through natural language interactions.
Perplexity Comet MCP
Bridges Claude with Perplexity's Comet browser for autonomous web browsing, research, and multi-tab workflow management. Supports dynamic content interaction, login wall handling, file uploads, and intelligent completion detection across Windows, macOS, and WSL platforms.
Bitget MCP Server
The official Bitget MCP (Model Context Protocol) server. Gives AI assistants direct, real-time access to the Bitget exchange through natural language.
MCP SSH Tools Server
A server based on the MCP framework that provides remote server management capabilities through SSH, supporting features like connection pooling, file transfers, and remote command execution.
telekash-mcp-server
Prediction market probability oracle for AI agents. 26 tools across 500+ live markets from Kalshi and Polymarket. Cross-source arbitrage detection, structured TPF signals, Kelly Criterion sizing, agent performance tracking, and webhook alerts.
GDB-MCP
An MCP server that provides programmatic access to the GNU Debugger (GDB), enabling AI models to interact with GDB through natural language for debugging tasks.
tryink
TryInk lets you upload a tattoo design and a body photo to see exactly how the tattoo will look on your skin before booking. 180+ ready-to-use designs across 11 styles (fine line, Japanese, blackwork, geometric…) and 18 body parts. Free, no sign-up required. Also available as a public REST API and MCP server for AI agents.
GetMailer MCP Server
Enables sending transactional emails through GetMailer from AI assistants. Supports email operations, template management, domain verification, analytics, suppression lists, and batch email jobs.
StarRocks MCP Server
A TypeScript implementation of a Model Context Protocol server that enables interaction with StarRocks databases, supporting SQL operations like queries, table creation, and data manipulation through standardized MCP tools.
Cairo Coder
The most powerful open-source Cairo code generator.
Repology MCP Server
Enables users to search and retrieve package repository information from Repology through natural language. Supports searching projects, getting detailed package information, and checking repository problems across multiple Linux distributions and package managers.
FoundryMCP
MCP Server for AI Agents accessing Palantir Foundry
My MCP Server
A customizable Model Context Protocol server built with mcp-framework that enables Claude to access external tools and capabilities through a standardized interface.
Localizable XStrings MCP Server
Provides tools for working with iOS Localizable.xcstrings files, enabling automated translation workflows and localization management for iOS/macOS projects using Xcode String Catalogs.
aica - AI Code Analyzer
aica (AI Code Analyzer) reviews your code using AI. Supports the CLI and GitHub Actions.
Vercel MCP Template
A template for deploying MCP servers on Vercel with example tools for rolling dice and fetching weather data. Provides a starting point for building custom MCP servers with TypeScript.
example-mcp-server
An example Anthropic MCP server built with FastAPI. It proxies chat completions to the Anthropic API, defining Pydantic request/response models, reading its configuration (API key, model, and sampling parameters) from environment variables, and exposing both a standard completion endpoint and a streaming endpoint that relays Anthropic's server-sent events, with error handling for API and network failures. Requires `fastapi`, `uvicorn`, `httpx`, and `pydantic`, plus the `ANTHROPIC_API_KEY` environment variable.
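The core of that example is a prompt-construction step that flattens a chat history into Anthropic's legacy completion-prompt format (alternating `Human:` / `Assistant:` turns, ending with an open `Assistant:` turn for the model to fill). A minimal, dependency-free sketch of that step, with the `Message` type reduced to a dataclass for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str


def build_prompt(messages: List[Message]) -> str:
    """Flatten a chat history into the legacy Anthropic completion-prompt
    format, ending with an open Assistant turn for the model to complete."""
    prompt = ""
    for m in messages:
        if m.role == "user":
            prompt += f"\n\nHuman: {m.content}"
        elif m.role == "assistant":
            prompt += f"\n\nAssistant: {m.content}"
    return prompt + "\n\nAssistant:"


print(build_prompt([
    Message("user", "Hi"),
    Message("assistant", "Hello!"),
    Message("user", "Tell me a joke"),
]))
```

Note this targets Anthropic's older `/v1/complete` text-completion API; the newer Messages API accepts the role/content list directly and does not need this flattening.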
MCP Embedding Storage Server
Enables storing and retrieving information using vector embeddings with semantic search capabilities. Integrates with the AI Embeddings API to automatically generate embeddings for content and perform similarity-based searches through natural language queries.
Gmail MCP Server
Gmail MCP server built with Java and Spring Boot.
Parallels RAS MCP Server
A read-only MCP server that provides visibility into Parallels Remote Application Server infrastructure, policies, and sessions through the RAS REST API. It enables AI assistants to query site settings, published applications, and license status without performing any modifications.
API Weaver
A comprehensive MCP server with RESTful API that enables file operations, command execution, project management, and AI integration for remote development control with security features and real-time communication.