Discover Awesome MCP Servers

Extend your agent with 28,665 capabilities via MCP servers.

MCP Demo Server

A minimal fastmcp demonstration server that provides a simple addition tool through the MCP protocol, supporting deployment via Docker with multiple transport modes.
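The addition tool in such a server is ultimately invoked through MCP's JSON-RPC `tools/call` method. As a rough, library-free illustration of that exchange (the real fastmcp server handles registration, schemas, and transports for you; the `TOOLS` registry and `handle_request` helper here are hypothetical):

```python
import json

# Hypothetical tool registry: a single "add" tool, as in the demo server.
TOOLS = {"add": lambda args: args["a"] + args["b"]}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC 2.0 'tools/call' request to a registered tool."""
    req = json.loads(raw)
    if req.get("method") != "tools/call":
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601, "message": "Method not found"}})
    params = req["params"]
    result = TOOLS[params["name"]](params["arguments"])
    # MCP tool results come back as a list of content blocks.
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "result": {"content": [{"type": "text", "text": str(result)}]}})

request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                      "params": {"name": "add", "arguments": {"a": 2, "b": 3}}})
print(handle_request(request))
```

A framework like fastmcp layers tool discovery (`tools/list`), argument schemas, and the stdio/HTTP transports on top of this same request/response shape.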

RealTest MCP Server

Provides structured access to RealTest backtesting documentation and example scripts to help LLM agents generate accurate RealScript code. It offers tools for semantic search, authoritative function references, and verified script retrieval to prevent hallucinations.

Applitools MCP Server

Enables AI assistants to set up, manage, and analyze visual tests using Applitools Eyes within Playwright JavaScript and TypeScript projects. It supports adding visual checkpoints, configuring cross-browser testing via Ultrafast Grid, and retrieving structured test results.

Somnia MCP Server

Enables AI agents to interact with the Somnia blockchain network, including documentation search, blockchain queries, wallet management, cryptographic signing, and on-chain operations.

Cars MCP Server

An example MCP (Model Context Protocol) server built with Spring AI. It assumes a basic understanding of Spring Boot and Spring AI.

**Conceptual Overview.** The server:

1. **Receives messages:** Accepts messages from clients (e.g., via HTTP).
2. **Uses Spring AI:** Leverages Spring AI to process the message (e.g., generate a response, extract information).
3. **Sends a response:** Returns a response to the client.

**Code Example (Simplified)**

```java
// Dependencies (pom.xml or build.gradle):
// - spring-boot-starter-web
// - spring-ai-spring-boot-starter (plus the starter for your AI provider, e.g., OpenAI)

import org.springframework.ai.client.AiClient;
import org.springframework.ai.prompt.PromptTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import java.util.Map;

@SpringBootApplication
public class McpServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(McpServerApplication.class, args);
    }
}

@RestController
class MessageController {

    @Autowired
    private AiClient aiClient;

    @PostMapping("/message")
    public String processMessage(@RequestBody String userMessage) {
        // 1. Create a prompt for the AI model. This is crucial!
        String promptTemplateText = "You are a helpful assistant. The user's message is: {userMessage}";
        PromptTemplate promptTemplate = new PromptTemplate(promptTemplateText);
        Map<String, Object> model = Map.of("userMessage", userMessage);

        // 2. Call the AI model using Spring AI.
        String response = aiClient.generate(promptTemplate.render(model));

        // 3. Return the AI's response.
        return response;
    }
}
```

**Explanation:**

1. **Dependencies:** Make sure you have the necessary dependencies in your `pom.xml` (Maven) or `build.gradle` (Gradle) file:
   * `spring-boot-starter-web`: for the web server (handling HTTP requests).
   * `spring-ai-spring-boot-starter`: the core Spring AI starter.
   * `spring-ai-openai-spring-boot-starter` (or similar): a starter for a specific AI provider (OpenAI, Azure OpenAI, Ollama, etc.). Choose one and configure it.
2. **`McpServerApplication`:** The main Spring Boot application class, responsible for starting the application.
3. **`MessageController`:**
   * `@RestController` marks the class as a REST controller that handles incoming HTTP requests.
   * `@Autowired private AiClient aiClient;` injects the `AiClient` bean, the main interface for interacting with the AI model. Spring AI configures it automatically based on your chosen provider.
   * `@PostMapping("/message")` maps HTTP POST requests on the `/message` endpoint to the `processMessage` method.
   * `@RequestBody String userMessage` binds the body of the HTTP request to the `userMessage` variable.
   * **Prompt engineering:** The most important part. `promptTemplateText` defines the prompt sent to the AI model, with a `{userMessage}` placeholder for the user's message. Good prompt engineering is crucial for getting good results.
   * `promptTemplate.render(model)` fills the placeholders with the values in the map, and `aiClient.generate(...)` sends the rendered prompt to the model. The model's response is returned as the HTTP response.

**Configuration (application.properties or application.yml)**

Configure Spring AI with your chosen AI provider's credentials. For OpenAI:

```properties
spring.ai.openai.api-key=YOUR_OPENAI_API_KEY
```

Replace `YOUR_OPENAI_API_KEY` with the API key you obtain from the OpenAI website after creating an account. The exact configuration properties vary by provider.

**How to Run It**

1. **Create a Spring Boot project:** Use Spring Initializr (start.spring.io) with the Web, Spring AI, and provider dependencies.
2. **Copy the code** above into your project.
3. **Configure your AI provider** in `application.properties` or `application.yml`.
4. **Run the application.**
5. **Send a message** with a tool like `curl` or Postman:

```bash
curl -X POST -H "Content-Type: text/plain" -d "Hello, can you tell me a joke?" http://localhost:8080/message
```

**Important Considerations and Improvements**

* **Error handling:** Catch exceptions that can occur during AI processing (API errors, rate limits).
* **Prompt engineering:** Experiment with different prompts; the prompt is the key to controlling the AI's behavior.
* **Security:** If you handle sensitive data, add proper authentication, authorization, and encryption.
* **Asynchronous processing:** For complex scenarios, use Spring's `@Async` annotation or a message queue to avoid blocking the main thread.
* **Data validation:** Validate incoming messages to prevent malicious input.
* **Logging:** Log requests, responses, and errors.
* **Richer data structures:** Send structured JSON objects instead of plain strings to pass more information to the model.
* **Streaming:** For long responses, use Spring AI's streaming capabilities to send the response to the client in chunks.

FFmpeg MCP

Enables video and audio processing through FFmpeg, supporting format conversion, compression, trimming, audio extraction, frame extraction, video merging, and subtitle burning through natural language commands.
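Servers like this typically translate a natural-language request into an `ffmpeg` command line. A hedged sketch of that mapping for two of the listed operations (the helper names are made up; the ffmpeg flags themselves are standard):

```python
def trim_command(src: str, dst: str, start: str, duration: str) -> list[str]:
    """Build an ffmpeg command that trims a clip without re-encoding."""
    # -ss seeks to the start time, -t limits the duration, -c copy avoids re-encoding
    return ["ffmpeg", "-y", "-ss", start, "-i", src, "-t", duration, "-c", "copy", dst]

def extract_audio_command(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command that extracts the audio track."""
    # -vn drops the video stream; ffmpeg picks the codec from the output extension
    return ["ffmpeg", "-y", "-i", src, "-vn", dst]

cmd = trim_command("input.mp4", "clip.mp4", "00:00:10", "00:00:05")
print(" ".join(cmd))
```

The server would then run the assembled list with something like `subprocess.run(cmd, check=True)`.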

MCP Weather Server

Enables users to retrieve current weather alerts for US states and detailed weather forecasts by geographic coordinates using the US National Weather Service API. Built with Node.js and TypeScript following Model Context Protocol standards for seamless LLM integration.
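The two lookups the description mentions map onto two NWS endpoints: state alerts via `/alerts/active?area={state}`, and coordinate forecasts via a points lookup whose response carries the actual forecast URL. A minimal sketch of the URL construction (the api.weather.gov endpoints are real; the helper functions are illustrative, not the server's code):

```python
BASE = "https://api.weather.gov"

def alerts_url(state: str) -> str:
    """Active alerts for a two-letter US state/territory code."""
    return f"{BASE}/alerts/active?area={state.upper()}"

def points_url(lat: float, lon: float) -> str:
    """Metadata for a coordinate; its JSON response contains the forecast URL
    under properties.forecast. Coordinates are rounded to 4 decimal places."""
    return f"{BASE}/points/{round(lat, 4)},{round(lon, 4)}"

print(alerts_url("ca"))               # https://api.weather.gov/alerts/active?area=CA
print(points_url(38.8894, -77.0352))  # https://api.weather.gov/points/38.8894,-77.0352
```

A client fetches the points URL first, then follows the forecast URL it returns to get the period-by-period forecast.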

fastf1-mcp-server

MCP server for Formula 1 data via the FastF1 library. Ask Claude (or any MCP-compatible client) about race results, lap times, telemetry, standings, pit stops, and qualifying — with historical data back to 1950 via the Ergast API.

mcp_server

A weather MCP (Model Context Protocol) server that can be called by a client IDE such as Cursor. The implementation breaks down as follows.

**1. Understanding the Requirements**

* **Message format:** The client (Cursor) and server need a structured way to exchange information. Common choices:
  * **JSON:** human-readable, easy to parse, widely supported; good for simple data structures.
  * **Protocol Buffers (protobuf):** more efficient (smaller messages, faster parsing), but requires defining schemas; better for complex data or performance-critical applications.
  * **XML:** verbose but well established; less common in new projects.
* **Weather data:** The server fetches weather information from a reliable source. Popular options:
  * **OpenWeatherMap:** free and paid tiers; current weather, forecasts, historical data; requires an API key.
  * **WeatherAPI.com:** similar to OpenWeatherMap, various plans.
  * **AccuWeather:** commercial API.
  * **National Weather Service (NWS, US):** free, but the data format can be less consistent.
* **Client (Cursor):** The server must expose an endpoint Cursor can call (e.g., an HTTP endpoint).
* **Error handling:** The server should gracefully handle invalid requests, API errors, and network issues.
* **Scalability (optional):** For many clients, design for scale (asynchronous operations, load balancing).

**2. Technology Stack**

* **Language:** Python, for its ease of use, extensive libraries, and suitability for web development.
* **Web framework:** Flask or FastAPI; FastAPI is generally preferred for its performance and automatic data validation.
* **HTTP library:** `requests` (for calling the weather service).
* **JSON library:** `json` (built into Python).
* **Asynchronous libraries (optional):** `asyncio` and `aiohttp` for handling concurrent requests.

**3. Implementation**

```python
# weather_server.py (example using Flask)
from flask import Flask, request, jsonify
import requests
import os  # For accessing environment variables
from dotenv import load_dotenv

load_dotenv()  # Load environment variables from .env

app = Flask(__name__)

# Set via environment variable; obtain from OpenWeatherMap or another provider
WEATHER_API_KEY = os.getenv("WEATHER_API_KEY")
WEATHER_API_URL = "https://api.openweathermap.org/data/2.5/weather"


def get_weather_data(city):
    """Fetches weather data from the OpenWeatherMap API."""
    try:
        params = {
            'q': city,
            'appid': WEATHER_API_KEY,
            'units': 'metric'  # Use Celsius
        }
        response = requests.get(WEATHER_API_URL, params=params)
        response.raise_for_status()  # Raise HTTPError for 4xx/5xx responses
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error fetching weather data: {e}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None


@app.route('/weather', methods=['GET'])
def weather():
    """Handles /weather. Expects a 'city' query parameter; returns JSON."""
    city = request.args.get('city')
    if not city:
        return jsonify({'error': 'City parameter is required'}), 400

    weather_data = get_weather_data(city)
    if weather_data:
        # Extract relevant information (customize as needed)
        return jsonify({
            'city': city,
            'temperature': weather_data['main']['temp'],
            'description': weather_data['weather'][0]['description'],
            'humidity': weather_data['main']['humidity'],
            'wind_speed': weather_data['wind']['speed'],
            'source': 'OpenWeatherMap'  # Indicate the data source
        })
    return jsonify({'error': 'Failed to retrieve weather data for the specified city'}), 500


if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)  # Listen on all interfaces
```

**Explanation:**

1. **Imports:** Flask, `requests`, and `os`/`dotenv` for configuration.
2. **API key:** The key is loaded from an environment variable. **Never hardcode API keys in your code.** Create a `.env` file next to the script containing `WEATHER_API_KEY=YOUR_API_KEY`, and add `.env` to `.gitignore` so the key is never committed to your repository.
3. **`get_weather_data(city)`:** Builds the request with the city and API key, calls the API with `requests`, handles errors (network issues, invalid key, etc.), and returns the parsed JSON or `None` on failure.
4. **`/weather` route:** Reads the `city` query parameter (e.g., `/weather?city=London`), fetches the data, extracts the relevant fields (**customize this part**), and returns JSON with appropriate status codes (400 for a bad request, 500 for a server error).
5. **Main block:** Starts the Flask development server. `debug=True` is useful during development (**disable it in production**); `host='0.0.0.0'` makes the server reachable from any IP address (important on a remote machine); `port=5000` is the listening port.

**4. Running the Server**

1. **Install dependencies:**
   ```bash
   pip install flask requests python-dotenv
   ```
2. **Set the environment variable:** create `.env` with `WEATHER_API_KEY=YOUR_API_KEY` (replace with your actual key).
3. **Run the server:**
   ```bash
   python weather_server.py
   ```
   The server starts and listens on `http://0.0.0.0:5000`.

**5. Client-Side (Cursor IDE) Integration**

Within the Cursor IDE, call the weather server's API. A conceptual example (the exact implementation depends on Cursor's capabilities):

```javascript
// Example JavaScript code (within Cursor)
async function getWeather(city) {
  const apiUrl = `http://localhost:5000/weather?city=${city}`; // Your server's address

  try {
    const response = await fetch(apiUrl);
    if (!response.ok) {
      throw new Error(`HTTP error! Status: ${response.status}`);
    }
    return await response.json();
  } catch (error) {
    console.error("Error fetching weather:", error);
    return null;
  }
}

async function displayWeather(city) {
  const weatherData = await getWeather(city);
  if (weatherData) {
    console.log(`Weather in ${weatherData.city}:`);
    console.log(`Temperature: ${weatherData.temperature}°C`);
    console.log(`Description: ${weatherData.description}`);
    console.log(`Humidity: ${weatherData.humidity}%`);
    console.log(`Wind Speed: ${weatherData.wind_speed} m/s`);
  } else {
    console.log("Failed to get weather information.");
  }
}

// Call this from Cursor's event handling (a command, a button click, etc.)
displayWeather("London");
```

`getWeather` builds the URL, fetches it with `fetch` (or a similar HTTP library available in Cursor), handles errors, and returns the parsed JSON; `displayWeather` prints the result. Integrate the call into Cursor's event handling mechanism (e.g., when a user types a command or clicks a button).

**6. Key Considerations and Improvements**

* **Error handling:** Comprehensive handling on both server and client; log errors to a file or monitoring system; return informative messages.
* **Data validation:** Validate input server-side (e.g., check the city name) with a library like `marshmallow` or `pydantic`.
* **Caching:** Cache weather data server-side (e.g., `cachetools` or `redis`) to reduce calls to the weather service.
* **Asynchronous operations:** Use `asyncio`/`aiohttp` for efficient concurrency with many clients.
* **Security:** HTTPS, authentication, and authorization for sensitive data.
* **Configuration:** Keep settings (API key, port, etc.) in a configuration file (YAML or JSON).
* **Logging:** Track server activity and debug issues.
* **Testing:** Write unit and integration tests.
* **Rate limiting:** Respect the weather API's rate limits; rate-limit your own server to avoid exceeding them.
* **API key security:** Never commit keys; use environment variables or secure configuration management.
* **Deployment:** Consider a cloud platform (AWS, Google Cloud, Azure) for scalability and reliability.

**Example using FastAPI (recommended for performance):**

```python
# weather_server_fastapi.py
from fastapi import FastAPI, HTTPException, Query
from pydantic import BaseModel
import requests
import os
from dotenv import load_dotenv

load_dotenv()

app = FastAPI()

WEATHER_API_KEY = os.getenv("WEATHER_API_KEY")
WEATHER_API_URL = "https://api.openweathermap.org/data/2.5/weather"


class WeatherResponse(BaseModel):
    city: str
    temperature: float
    description: str
    humidity: int
    wind_speed: float
    source: str


def get_weather_data(city: str):
    try:
        params = {'q': city, 'appid': WEATHER_API_KEY, 'units': 'metric'}
        response = requests.get(WEATHER_API_URL, params=params)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error fetching weather data: {e}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None


@app.get("/weather", response_model=WeatherResponse)
async def weather(city: str = Query(..., title="City", description="The city to get weather for")):
    weather_data = get_weather_data(city)
    if weather_data:
        return WeatherResponse(
            city=city,
            temperature=weather_data['main']['temp'],
            description=weather_data['weather'][0]['description'],
            humidity=weather_data['main']['humidity'],
            wind_speed=weather_data['wind']['speed'],
            source='OpenWeatherMap',
        )
    raise HTTPException(status_code=500, detail="Failed to retrieve weather data for the specified city")


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

**Key differences with FastAPI:**

* **FastAPI** replaces Flask; it is generally faster and provides automatic data validation via Pydantic.
* **Pydantic:** the `WeatherResponse` `BaseModel` defines the response structure, with automatic validation and serialization.
* **Type hints** (e.g., `city: str`) improve readability and maintainability.
* **Query parameters:** `Query` declares `city` as a required query parameter.
* **`HTTPException`** raises HTTP errors with appropriate status codes and messages.
* **uvicorn** is the ASGI server that runs the FastAPI application.

To run the FastAPI example:

1. **Install dependencies:** `pip install fastapi uvicorn requests python-dotenv`
2. **Set the environment variable:** create `.env` with `WEATHER_API_KEY=YOUR_API_KEY`.
3. **Run the server:** `python weather_server_fastapi.py`

The server listens on `http://0.0.0.0:8000`; interactive API documentation is available at `http://0.0.0.0:8000/docs`.

Model Context Protocol (MCP) + Spring Boot Integration

Experimenting with new MCP server features using Spring Boot.

Expense Tracker MCP Server

Enables AI assistants like Claude to manage personal expenses locally using SQLite. Supports adding, categorizing, summarizing expenses, setting budgets, and exporting data without cloud services.
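The local-SQLite approach the description refers to can be sketched in a few lines of stdlib Python. This is a guess at the general pattern, not the server's actual schema or code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the real server would use a file on disk

conn.execute("""CREATE TABLE expenses (
    id INTEGER PRIMARY KEY,
    date TEXT NOT NULL,
    category TEXT NOT NULL,
    amount REAL NOT NULL,
    note TEXT)""")

def add_expense(date, category, amount, note=""):
    conn.execute("INSERT INTO expenses (date, category, amount, note) VALUES (?, ?, ?, ?)",
                 (date, category, amount, note))

def summarize_by_category():
    """Total spend per category, largest first."""
    return conn.execute(
        "SELECT category, SUM(amount) FROM expenses GROUP BY category ORDER BY 2 DESC"
    ).fetchall()

add_expense("2024-06-01", "food", 12.50, "lunch")
add_expense("2024-06-02", "food", 8.00)
add_expense("2024-06-02", "transport", 3.20)
print(summarize_by_category())  # [('food', 20.5), ('transport', 3.2)]
```

An MCP server would expose `add_expense` and `summarize_by_category` as tools, so the assistant calls them instead of touching the database directly.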

Model Context Protocol (MCP) MSPaint App Automation

A demo that solves a math problem via a client/server pair and displays the solution in MSPaint. It involves four parts:

1. **Server:** receives math problems, solves them, and prepares the solution.
2. **Client:** sends the math problem to the server.
3. **Math-solving logic:** the code that actually solves the problem (a very basic example here).
4. **MSPaint integration:** the trickiest part; the server renders the solution to an image (PNG) and programmatically opens it in MSPaint.

**Important considerations:**

* **Security:** This code is for demonstration only. Do *not* expose the server to a public network without proper security measures; executing arbitrary input from a remote client is a major security risk.
* **Error handling:** Basic here; expand it for a production environment.
* **Complexity:** Solving complex math problems and representing them visually for MSPaint is a significant undertaking; this example handles only simple arithmetic.
* **MSPaint automation:** Directly controlling MSPaint from code is challenging and platform-dependent, so the approach here is to create an image and then open it.

```python
# server.py (math server)
import socket
import threading
import subprocess  # For opening MSPaint
import os
from PIL import Image, ImageDraw, ImageFont  # For image generation

HOST = '127.0.0.1'  # Localhost
PORT = 65432        # Port to listen on


def solve_math_problem(problem):
    """Solves a simple math problem (addition or subtraction).
    This is a placeholder; replace with more sophisticated logic."""
    try:
        problem = problem.strip()
        if "+" in problem:
            num1, num2 = map(int, problem.split("+"))
            return f"{num1} + {num2} = {num1 + num2}"
        elif "-" in problem:
            num1, num2 = map(int, problem.split("-"))
            return f"{num1} - {num2} = {num1 - num2}"
        return "Error: Invalid problem format. Use 'number+number' or 'number-number'."
    except Exception as e:
        return f"Error: {e}"


def create_image_from_text(text, filename="solution.png"):
    """Creates an image with the given text."""
    image_width, image_height = 500, 200
    image = Image.new("RGB", (image_width, image_height), "white")
    draw = ImageDraw.Draw(image)

    # Choose a font (you might need to adjust the path)
    try:
        font = ImageFont.truetype("arial.ttf", size=30)
    except IOError:
        font = ImageFont.load_default()  # Fallback if Arial is unavailable

    # textbbox() replaces textsize(), which was removed in Pillow 10
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    text_width, text_height = right - left, bottom - top
    text_x = (image_width - text_width) // 2
    text_y = (image_height - text_height) // 2
    draw.text((text_x, text_y), text, fill="black", font=font)
    image.save(filename)
    return filename


def handle_client(conn, addr):
    """Handles communication with a single client."""
    print(f"Connected by {addr}")
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            problem = data.decode()
            print(f"Received problem: {problem}")
            solution = solve_math_problem(problem)
            print(f"Solution: {solution}")
            image_filename = create_image_from_text(solution)
            try:
                # Open the image in MSPaint; check=True raises on error
                subprocess.run(["mspaint", image_filename], check=True)
            except FileNotFoundError:
                conn.sendall(b"Error: MSPaint not found.")
                print("Error: MSPaint not found.")
            except subprocess.CalledProcessError as e:
                conn.sendall(f"Error opening MSPaint: {e}".encode())
                print(f"Error opening MSPaint: {e}")
            except Exception as e:
                conn.sendall(f"Error: {e}".encode())
                print(f"Error: {e}")
            else:
                conn.sendall(b"Solution displayed in MSPaint.")  # Confirm to client
            os.remove(image_filename)  # Clean up the image file


def start_server():
    """Starts the server."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen()
        print(f"Server listening on {HOST}:{PORT}")
        while True:
            conn, addr = s.accept()
            threading.Thread(target=handle_client, args=(conn, addr)).start()


if __name__ == "__main__":
    start_server()
```

```python
# client.py (client)
import socket

HOST = '127.0.0.1'  # The server's hostname or IP address
PORT = 65432        # The port used by the server


def send_problem(problem):
    """Sends a math problem to the server and prints the response."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.connect((HOST, PORT))
            s.sendall(problem.encode())
            data = s.recv(1024)
            print(f"Received: {data.decode()}")
        except ConnectionRefusedError:
            print("Error: Could not connect to the server. Make sure the server is running.")
        except Exception as e:
            print(f"Error: {e}")


if __name__ == "__main__":
    problem = input("Enter a math problem (e.g., 5+3 or 10-2): ")
    send_problem(problem)
```

**Explanation:**

* **`server.py`:**
  * `solve_math_problem(problem)` takes a string like `"5+3"` and returns the solution as a string. **This is where more complex math-solving logic would go.**
  * `create_image_from_text(text, filename)` uses the Pillow (PIL) library to render the solution text to a PNG, handling font selection and centering. (Note: `draw.textsize()` was removed in Pillow 10, so the code uses `draw.textbbox()` instead.)
  * `handle_client(conn, addr)` receives the problem, solves it, renders the image, opens it in MSPaint with `subprocess.run`, sends a confirmation back to the client, and deletes the image file afterwards.
  * `start_server()` sets up the listening socket and spawns a thread per client connection.
* **`client.py`:**
  * `send_problem(problem)` connects to the server, sends the problem, and prints the response.

**How to run:**

1. **Install Pillow:** `pip install Pillow`
2. **Save the code** as `server.py` and `client.py`.
3. **Run the server:** `python server.py`
4. **Run the client** in another terminal: `python client.py`, then enter a problem such as `5+3`.

**Important notes and improvements:**

* **Error handling:** The handling here is basic; add more robust handling with informative error messages.
* **Security:** Not secure for production. Add authentication and authorization, and consider TLS/SSL for the connection. **Never execute arbitrary code received from a client.**
* **Math solving:** `solve_math_problem` is very limited; for more complex problems, consider a library such as `sympy` for symbolic mathematics.
* **MSPaint automation:** Creating an image and opening it is a simple workaround. Deeper integration would require the Windows API (e.g., via `pywin32`), which is significantly more complex, and MSPaint's capabilities are limited in any case.
* **Font availability:** The code falls back to a default font if `arial.ttf` is missing; consider making the font configurable.
* **Cross-platform compatibility:** `subprocess.run(["mspaint", image_filename])` is Windows-specific. For other systems, open images with `eog` (Linux) or `open` (macOS), and detect the platform with `platform.system()`.
* **Protocol:** This is a very basic client/server exchange. A real protocol would define message types, data formats, and error codes, using a serialization format such as JSON or Protocol Buffers.

Adapt the code to your specific needs, and prioritize security before using it in a real-world application.

Purple Flea Wallet

Non-custodial HD wallet API for AI agents. Generate wallets on 6 chains (ETH, Base, SOL, BTC, TRX, XMR), check balances, send crypto, and swap cross-chain via Wagyu aggregator. 10% referral commissions.

Remote MCP Server Authless

Remote MCP Server Authless

A Cloudflare Workers-based Model Context Protocol server without authentication requirements, allowing users to deploy and customize AI tools that can be accessed from Claude Desktop or Cloudflare AI Playground.

Databricks MCP Server

Databricks MCP Server

A Model Context Protocol server that enables AI assistants to interact with Databricks workspaces, allowing them to browse Unity Catalog, query metadata, sample data, and execute SQL queries.

MCP Server with Azure Communication Services Email

MCP Server with Azure Communication Services Email

An MCP server that sends email through Azure Communication Services.

DrissionPage MCP Browser Automation

DrissionPage MCP Browser Automation

Provides browser automation and web scraping capabilities including page navigation, form filling, data extraction, and intelligent conversion of web pages to Markdown format.

i1n

i1n

Localization as code — push, pull, translate, and extract strings from code with AI. 7 MCP tools for type-safe i18n across 182 languages.

Memory-IA MCP Server

Memory-IA MCP Server

Gives AI agents persistent memory using SQLite and local LLM models through Ollama integration. Provides chat with context retention and multi-client support across VS Code, Gemini-CLI, and terminal interfaces.

minesweeper-mcp

minesweeper-mcp

A stdio-based MCP server that enables users to play and manage Minesweeper games through a Rails 8 REST API. It provides tools to start games, track states, and perform actions like opening cells and flagging mines.

React USWDS MCP Server

React USWDS MCP Server

Indexes the locally-installed @trussworks/react-uswds package and helps code assistants discover components, inspect props, generate correct imports and usage snippets, and suggest appropriate components for UI use cases.

Canteen MCP

Canteen MCP

A Model Context Protocol server that provides structured access to canteen lunch menus for specific dates through a simple API integration.

MCP Tailwind Gemini Server

MCP Tailwind Gemini Server

Advanced Model Context Protocol server that integrates Gemini AI with Tailwind CSS, providing intelligent component generation, class optimization, and cross-platform design assistance across major development environments.

Skyvern MCP

Skyvern MCP

Skyvern MCP server lets AI agents control a real browser to navigate websites, fill forms, authenticate, and extract structured data. Supports multi-step automation workflows via natural language.

code-analyze-mcp

code-analyze-mcp

Standalone MCP server for code structure analysis using tree-sitter. Produces directory trees, symbol definitions, and call graphs without reading raw source files. Supports Rust, Python, Go, Java, TypeScript, Fortran, JavaScript, C/C++, and C#. Benchmarked at up to 68% fewer tokens than native tools.

Anki MCP Server

Anki MCP Server

Enables AI assistants to manage Anki flashcard decks and cards through natural language, supporting deck creation, card additions (basic and cloze types), and review queue management.

Freedcamp MCP Server

Freedcamp MCP Server

A Model Context Protocol server that enables seamless integration with Freedcamp API for enterprise-level project management with advanced filtering, full CRUD operations, and extensive customization options.

SAP OData to MCP Server

SAP OData to MCP Server

Transforms SAP S/4HANA or ECC systems into conversational AI interfaces by exposing all OData services as dynamic MCP tools. Enables natural language interactions with ERP data for querying, creating, updating, and deleting business entities through SAP BTP integration.

MCP Servers for Teams

MCP Servers for Teams

A template deployment for MCP servers.

Bureau of Economic Analysis (BEA) MCP Server

Bureau of Economic Analysis (BEA) MCP Server

Provides access to comprehensive U.S. economic data including GDP, personal income, and regional statistics via the Bureau of Economic Analysis API. It enables users to query datasets and retrieve specific economic indicators for states, counties, and industries through natural language.