Discover Awesome MCP Servers
Extend your agent with 14,570 capabilities via MCP servers.
- All (14,570)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)

Bilibili MCP Server
Enables interaction with the Bilibili (B站) platform through its API and web scraping. Supports video search, article search, video info retrieval, comment fetching, danmaku extraction, and article content access.

Xero MCP Server
An MCP server that lets clients interact with the Xero accounting software.

GLM-4.5V MCP Server
Enables multimodal AI capabilities through GLM-4.5V API for image processing, visual querying with OCR/QA/detection modes, and file content extraction from various formats including PDFs, documents, and images.
Simple MCP Search Server
FastMCP Server Generator
A dedicated MCP server that helps users generate custom MCP servers.

Remote MCP Server Authless
A Cloudflare Workers-based remote Model Context Protocol server that operates without authentication requirements, allowing users to deploy custom AI tools that can be accessed from Claude Desktop or the Cloudflare AI Playground.

Mcp Akshare
AKShare is a Python-based financial data interface library that aims to provide a complete toolset, from data collection and cleaning to storage, for fundamental data, real-time and historical price data, and derivatives data of financial products such as stocks, futures, options, funds, foreign exchange, bonds, indices, and cryptocurrencies, intended primarily for academic research.
MCP Montano Server

ETH Price Current Server
A minimal Model Context Protocol (MCP) server that fetches the current Ethereum (ETH) price in USD. Data source: the public CoinGecko API (no API key required). This MCP server is designed to simulate malicious behavior, specifically an attempt to mislead an LLM into returning incorrect results.
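
For illustration, here is a minimal sketch of how such a price-lookup tool could look, assuming the Python MCP SDK's FastMCP helper and CoinGecko's public simple-price endpoint; the server and tool names are illustrative, not taken from the listed project.

```python
# eth_price_sketch.py - illustrative only, not the listed project's code
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("eth-price")  # hypothetical server name

COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

@mcp.tool()
def get_eth_price_usd() -> float:
    """Return the current Ethereum price in USD from the public CoinGecko API."""
    resp = requests.get(
        COINGECKO_URL,
        params={"ids": "ethereum", "vs_currencies": "usd"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["ethereum"]["usd"]

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```
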
A MCP server for Godot RAG
This MCP server provides Godot documentation to the Godot RAG model.

MCP Demo Server
A minimal fastmcp demonstration server that provides a simple addition tool through the MCP protocol, supporting deployment via Docker with multiple transport modes.
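
As a rough sketch of what such a demo looks like, the snippet below exposes a single addition tool with fastmcp; the environment-variable switch for the transport is an assumption for illustrating "multiple transport modes", not the project's documented interface.

```python
# demo_add_sketch.py - illustrative sketch, not the project's actual source
import os
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")  # hypothetical server name

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers and return the sum."""
    return a + b

if __name__ == "__main__":
    # Transport chosen via an environment variable (assumed name) so the same
    # Docker image could run over stdio locally or a networked transport.
    mcp.run(transport=os.environ.get("MCP_TRANSPORT", "stdio"))
```
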
mcpserver-semantickernel-client-demo
Demonstrates an extremely simple implementation of a C# MCP server hosted with Aspire and consumed by Semantic Kernel.

Taximail
mcp-server-newbie
MCP Server Tutorial
Cars MCP Server
A basic example of setting up an MCP server using Spring AI. A Spring Boot REST controller accepts a user message over HTTP, renders it into a prompt template, sends the prompt to the configured AI provider (e.g. OpenAI) through Spring AI's AiClient, and returns the generated text as the response. The write-up covers the required dependencies (spring-boot-starter-web, the Spring AI starter, and a provider-specific starter), API-key configuration in application.properties, running the server and testing it with curl, and suggested improvements such as error handling, prompt engineering, input validation, security, asynchronous processing, logging, and response streaming.

Remote MCP Server on Cloudflare
A deployable Model Context Protocol server on Cloudflare Workers that enables AI models to access custom tools without authentication requirements.

MCP Weather Server
Enables users to retrieve current weather alerts for US states and detailed weather forecasts by geographic coordinates using the US National Weather Service API. Built with Node.js and TypeScript following Model Context Protocol standards for seamless LLM integration.
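
The listed project is built in Node.js/TypeScript; purely as an illustration of the two National Weather Service endpoints it describes (active alerts by state, and point-to-forecast lookup), here is a hedged Python sketch. The contact address in the User-Agent header is a placeholder.

```python
# nws_sketch.py - illustrative Python sketch of the NWS calls described above;
# the listed project itself is Node.js/TypeScript.
import requests

NWS_BASE = "https://api.weather.gov"
HEADERS = {"User-Agent": "weather-mcp-example (contact@example.com)"}  # NWS asks for a UA

def get_alerts(state: str) -> list[str]:
    """Return active alert headlines for a two-letter US state code."""
    resp = requests.get(f"{NWS_BASE}/alerts/active", params={"area": state},
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return [f["properties"]["headline"] for f in resp.json()["features"]]

def get_forecast(lat: float, lon: float) -> list[dict]:
    """Resolve the forecast grid for a coordinate, then return the forecast periods."""
    point = requests.get(f"{NWS_BASE}/points/{lat},{lon}", headers=HEADERS, timeout=10)
    point.raise_for_status()
    forecast_url = point.json()["properties"]["forecast"]
    forecast = requests.get(forecast_url, headers=HEADERS, timeout=10)
    forecast.raise_for_status()
    return forecast.json()["properties"]["periods"]
```
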
mcp_server
A walkthrough for implementing a weather MCP server that can be called from a client IDE such as Cursor. It discusses choosing a message format (JSON, Protocol Buffers, or XML), picking a weather data provider (OpenWeatherMap, WeatherAPI.com, AccuWeather, or the US National Weather Service), and a suggested Python stack (Flask or FastAPI with requests). Example Flask and FastAPI implementations expose a /weather endpoint that queries the OpenWeatherMap API with an API key loaded from a .env file and returns the city's temperature, description, humidity, and wind speed as JSON; a JavaScript fetch snippet shows how the client side could call the endpoint. Additional considerations include error handling, input validation, caching, rate limiting, asynchronous processing, API-key security, logging, testing, and deployment.

System Information MCP Server
Provides comprehensive system diagnostics and hardware analysis through 10 specialized tools for troubleshooting and environment monitoring. Offers targeted information gathering for CPU, memory, network, storage, processes, and security analysis across Windows, macOS, and Linux platforms.
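
As a hedged sketch of how a couple of such diagnostic tools might be exposed (assuming psutil and the Python MCP SDK; the tool names are hypothetical, not the project's actual ten tools):

```python
# sysinfo_sketch.py - illustrative sketch; tool names are hypothetical,
# not the listed project's actual tool set.
import platform
import psutil
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("system-info")  # hypothetical server name

@mcp.tool()
def cpu_info() -> dict:
    """Basic CPU diagnostics: architecture, core counts, and current utilization."""
    return {
        "machine": platform.machine(),
        "physical_cores": psutil.cpu_count(logical=False),
        "logical_cores": psutil.cpu_count(logical=True),
        "utilization_percent": psutil.cpu_percent(interval=0.5),
    }

@mcp.tool()
def memory_info() -> dict:
    """Virtual memory totals and current usage."""
    vm = psutil.virtual_memory()
    return {"total_bytes": vm.total, "available_bytes": vm.available,
            "used_percent": vm.percent}

if __name__ == "__main__":
    mcp.run()
```
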
SQL-Server-MCP

UK Bus Departures MCP Server
Enables users to get real-time UK bus departure information and validate bus stop ATCO codes by scraping bustimes.org. Provides structured data including service numbers, destinations, scheduled and expected departure times for any UK bus stop.
🦉 OWL x WhatsApp MCP Server Integration
Chroma MCP Server
An MCP server for integrating ChromaDB into Cursor with MCP-compatible AI models.
ResembleMCP
An implementation challenge for Resemble AI's MCP server.

Canteen MCP
A Model Context Protocol server that provides structured access to canteen lunch menus for specific dates through a simple API integration.
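
A minimal sketch of what a date-based menu tool could look like; the backend URL and response shape below are hypothetical assumptions for illustration, not the listed project's actual API.

```python
# canteen_sketch.py - illustrative sketch; the menu API URL and response shape
# are hypothetical, not the listed project's actual backend.
import datetime
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("canteen")  # hypothetical server name

MENU_API = "https://canteen.example.com/api/menu"  # hypothetical endpoint

@mcp.tool()
def get_lunch_menu(date: str | None = None) -> dict:
    """Return the lunch menu for an ISO date (YYYY-MM-DD); defaults to today."""
    day = date or datetime.date.today().isoformat()
    resp = requests.get(MENU_API, params={"date": day}, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()
```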

MCP Tailwind Gemini Server
Advanced Model Context Protocol server that integrates Gemini AI with Tailwind CSS, providing intelligent component generation, class optimization, and cross-platform design assistance across major development environments.

Freedcamp MCP Server
A Model Context Protocol server that enables seamless integration with Freedcamp API for enterprise-level project management with advanced filtering, full CRUD operations, and extensive customization options.

Stampchain MCP Server
A Model Context Protocol server that enables interaction with Bitcoin Stamps data via the Stampchain API, providing tools for querying stamp information, collections, and blockchain data without requiring authentication.

SAP OData to MCP Server
Transforms SAP S/4HANA or ECC systems into conversational AI interfaces by exposing all OData services as dynamic MCP tools. Enables natural language interactions with ERP data for querying, creating, updating, and deleting business entities through SAP BTP integration.