Discover Awesome MCP Servers

Extend your agent with 14,657 capabilities via MCP servers.

UniAuto MCP Server

Universal Test Automation MCP server with self-healing capabilities and Smithery.ai integration.

Playwright Fetch MCP Server

Provides web content fetching capabilities using Playwright browser automation, enabling LLMs to retrieve and process JavaScript-rendered content from web pages and convert HTML to markdown for easier consumption.

openai-agents-chat-demo

A chat demo built with OpenAI Agents. Integrates custom LLMs and MCP servers along with function tools.

MCP Interactive Service

An interface enabling high-frequency communication between AI tools (like Cursor and Windsurf) and users, allowing for option selection and information gathering through CLI, Web, or PyQt interfaces.

Goodreads MCP Server

A CLI tool that integrates with Claude Desktop to allow users to retrieve their Goodreads library data through MCP.

Discourse MCP

Enables AI agents to interact with Discourse forums through search, reading topics/posts, browsing categories and tags, and creating posts. Supports both read-only public access and authenticated operations with configurable write permissions.

mcp-server-example

Garmin MCP Server

Enables ChatGPT to access and analyze personal Garmin health data including daily steps, heart rate, calories, sleep duration, and body battery levels. Collects data via webhook from Garmin devices and provides health insights through natural language queries.

YouTube MCP

A Model Context Protocol server that enables Claude to interact with YouTube data and functionality through the Claude Desktop application.

Remote MCP Server (Authless)

A deployable Model Context Protocol server on Cloudflare Workers that enables custom AI tools without requiring authentication, compatible with Cloudflare AI Playground and Claude Desktop.

GitHub MCP Server

GitHub's official MCP server.

Kroger MCP Server

A FastMCP server that provides AI assistants like Claude with seamless access to Kroger's grocery shopping functionality through the Model Context Protocol, enabling store finding, product searching, and cart management.

Netlify Express MCP Server

A basic example MCP server deployed as a Netlify serverless function using Express. Demonstrates how to run Model Context Protocol servers in a serverless environment with proper routing configuration.

Microsoft Copilot Studio ❤️ MCP

A lab demonstrating how to deploy a Model Context Protocol (MCP) server and integrate it with Microsoft Copilot Studio, allowing users to connect AI models to different data sources and tools through a standardized protocol.

MCP-Context-Provider

A static MCP server that helps AI models maintain tool context across chat sessions, preventing loss of important information and keeping conversations smooth and uninterrupted.

mcp-angular-cli

MCP Kali Server

Okay, here's a breakdown of how to configure an MCP (presumably referring to a **Management and Configuration Protocol**, though the exact meaning depends on the specific AI agent and its ecosystem) to connect an AI agent to a Linux machine. I'll cover the general concepts and common approaches, but you'll need to adapt this to the specific requirements of your AI agent and MCP implementation.

**Understanding the Goal**

The core idea is to enable your AI agent to:

1. **Execute commands** on the Linux machine.
2. **Retrieve information** from the Linux machine (e.g., system status, logs, file contents).
3. **Potentially configure** the Linux machine (e.g., change settings, install software).

**General Steps and Considerations**

1. **Choose an MCP (Management and Configuration Protocol):**
   * **SSH (Secure Shell):** A very common and secure option. It allows you to execute commands remotely. You'll need an SSH client on the AI agent's side and an SSH server (usually `sshd`) running on the Linux machine.
   * **REST API:** You can create a REST API on the Linux machine that exposes specific functions for the AI agent to call. This requires more development effort but offers fine-grained control. Frameworks like Flask (Python) or Node.js (JavaScript) are often used.
   * **gRPC:** A high-performance, open-source RPC framework. Good for complex interactions and structured data.
   * **Message Queues (e.g., RabbitMQ, Kafka):** The AI agent can send commands as messages to a queue, and a process on the Linux machine can consume those messages and execute the commands. Useful for asynchronous communication.
   * **Custom Protocol:** If none of the above fit your needs, you can design your own protocol. This is the most complex option.
   * **Existing Management Tools:** Consider leveraging existing management tools like Ansible, Chef, or Puppet if they are already in use in your environment. The AI agent could interact with these tools' APIs.
2. **Security:** This is paramount.
   * **Authentication:** How will the AI agent prove its identity to the Linux machine?
     * **SSH Keys:** The most secure method for SSH. Generate an SSH key pair, place the public key on the Linux machine in the `~/.ssh/authorized_keys` file for the appropriate user, and configure the AI agent to use the private key. **Never** hardcode passwords in your AI agent's code.
     * **API Keys:** For REST APIs, use API keys that are securely stored and transmitted (e.g., using HTTPS).
     * **Certificates:** For gRPC, use TLS/SSL certificates for authentication and encryption.
     * **Username/Password (Discouraged):** Avoid this if possible, especially for SSH. If you must use it, ensure the password is very strong and stored securely (e.g., using a secrets management system).
   * **Authorization:** What permissions does the AI agent have on the Linux machine?
     * **Principle of Least Privilege:** Grant the AI agent only the minimum necessary permissions to perform its tasks. Create a dedicated user account for the AI agent with limited privileges.
     * **sudo:** Use `sudo` to allow the AI agent to execute specific commands as root, but only those commands that are absolutely necessary. Configure `sudoers` carefully.
     * **API Access Control:** For REST APIs, implement access control mechanisms to restrict which endpoints the AI agent can access.
   * **Encryption:** Use encryption (e.g., HTTPS for REST APIs, SSH for command execution) to protect data in transit.
   * **Firewall:** Configure the Linux machine's firewall (e.g., `iptables`, `ufw`) to allow only necessary connections from the AI agent's IP address or network.
   * **Auditing:** Log all actions performed by the AI agent on the Linux machine. This is crucial for security monitoring and troubleshooting.
3. **Configuration on the Linux Machine:**
   * **Install the necessary software:** This might include an SSH server, a web server (for REST APIs), gRPC libraries, message queue clients, etc.
   * **Configure the firewall:** Allow connections from the AI agent's IP address on the appropriate port (e.g., port 22 for SSH, port 443 for HTTPS).
   * **Create a user account (recommended):** Create a dedicated user account for the AI agent with limited privileges.
   * **Set up SSH keys (if using SSH):** Add the AI agent's public key to the `~/.ssh/authorized_keys` file for the user account.
   * **Configure `sudoers` (if needed):** If the AI agent needs to execute commands as root, configure the `sudoers` file to allow it to execute only those specific commands. Use `visudo` to edit the `sudoers` file safely.
4. **Configuration on the AI Agent Side:**
   * **Install the necessary libraries:** This might include an SSH client library, an HTTP client library, gRPC libraries, message queue clients, etc.
   * **Configure the connection parameters:** This includes the Linux machine's IP address or hostname, the port number, the username, the password or SSH key, and any other necessary parameters. **Store these parameters securely.** Avoid hardcoding them in the code. Use environment variables, configuration files, or a secrets management system.
   * **Implement the communication logic:** Write the code that sends commands to the Linux machine and receives responses.

**Example: Using SSH**

This is a common and relatively simple approach.

* **Linux Machine:**

  1. **Install SSH server:** `sudo apt-get update && sudo apt-get install openssh-server` (on Debian/Ubuntu)
  2. **Create a user (optional but recommended):** `sudo adduser aiagent`
  3. **Generate SSH key pair on the AI agent's machine:** `ssh-keygen -t rsa -b 4096` (Follow the prompts. Choose a strong passphrase.)
  4. **Copy the public key to the Linux machine:** There are several ways to do this. One easy way is using `ssh-copy-id`:

     ```bash
     ssh-copy-id aiagent@<linux_machine_ip_address>
     ```

     (You'll be prompted for the `aiagent` user's password on the Linux machine the first time.)
  5. **Test the connection:** `ssh aiagent@<linux_machine_ip_address>` (You should be able to log in without a password if the SSH key is configured correctly.)
  6. **Configure `sudoers` (if needed):** Use `sudo visudo` to edit the `sudoers` file. For example, to allow the `aiagent` user to execute `/sbin/reboot` without a password:

     ```
     aiagent ALL=(ALL) NOPASSWD: /sbin/reboot
     ```

     **Be very careful when editing the `sudoers` file.** Incorrect configuration can compromise the security of your system.

* **AI Agent (Python example using `paramiko`):**

  ```python
  import paramiko

  def execute_command(hostname, username, private_key_path, command):
      """Executes a command on a remote Linux machine using SSH."""
      try:
          ssh_client = paramiko.SSHClient()
          # WARNING: AutoAddPolicy is insecure for production!
          ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
          private_key = paramiko.RSAKey.from_private_key_file(private_key_path)
          ssh_client.connect(hostname=hostname, username=username, pkey=private_key)
          stdin, stdout, stderr = ssh_client.exec_command(command)
          output = stdout.read().decode('utf-8')
          error = stderr.read().decode('utf-8')
          ssh_client.close()
          return output, error
      except Exception as e:
          return None, str(e)

  if __name__ == '__main__':
      hostname = "your_linux_machine_ip"
      username = "aiagent"
      private_key_path = "/path/to/your/private/key"  # Replace with the actual path
      command = "uname -a"  # Example command
      output, error = execute_command(hostname, username, private_key_path, command)
      if output:
          print("Output:\n", output)
      if error:
          print("Error:\n", error)
  ```

**Important Notes about the SSH Example:**

* **`paramiko.AutoAddPolicy()` is insecure for production.** It automatically adds the server's host key to the `known_hosts` file. In a production environment, you should verify the server's host key manually.
* **Store the `private_key_path` securely.** Do not hardcode it in the code. Use environment variables or a secrets management system.
* **Error Handling:** The example includes basic error handling, but you should implement more robust error handling in a production environment.
* **Security Best Practices:** Always follow security best practices when working with SSH keys. Protect your private key.

**Example: Using a REST API**

* **Linux Machine (Python/Flask example):**

  ```python
  from flask import Flask, request, jsonify
  import subprocess
  import os

  app = Flask(__name__)

  API_KEY = os.environ.get("AI_AGENT_API_KEY")  # Get API key from environment variable

  def check_api_key(api_key):
      return api_key == API_KEY

  @app.route('/execute', methods=['POST'])
  def execute_command():
      api_key = request.headers.get('X-API-Key')
      if not check_api_key(api_key):
          return jsonify({'error': 'Invalid API key'}), 401
      data = request.get_json()
      command = data.get('command')
      if not command:
          return jsonify({'error': 'Command is required'}), 400
      try:
          result = subprocess.run(command, shell=True, capture_output=True,
                                  text=True, timeout=10)  # Add timeout
          return jsonify({
              'output': result.stdout,
              'error': result.stderr,
              'returncode': result.returncode
          })
      except subprocess.TimeoutExpired:
          return jsonify({'error': 'Command timed out'}), 504
      except Exception as e:
          return jsonify({'error': str(e)}), 500

  if __name__ == '__main__':
      app.run(debug=False, host='0.0.0.0', port=5000)
  ```

  * **Install Flask:** `pip install Flask`
  * **Set the API key:** `export AI_AGENT_API_KEY="your_secure_api_key"` (or set it in your system's environment variables)
  * **Run the Flask app:** `python your_flask_app.py`

* **AI Agent (Python example using `requests`):**

  ```python
  import requests
  import os

  def execute_remote_command(hostname, command):
      api_key = os.environ.get("AI_AGENT_API_KEY")
      if not api_key:
          print("Error: AI_AGENT_API_KEY environment variable not set.")
          return None, None
      url = f"http://{hostname}:5000/execute"  # Replace with your API endpoint
      headers = {'Content-Type': 'application/json', 'X-API-Key': api_key}
      data = {'command': command}
      try:
          response = requests.post(url, headers=headers, json=data, timeout=5)  # Add timeout
          response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
          result = response.json()
          return result.get('output'), result.get('error')
      except requests.exceptions.RequestException as e:
          print(f"Request error: {e}")
          return None, str(e)

  if __name__ == '__main__':
      hostname = "your_linux_machine_ip"
      command = "ls -l /tmp"
      output, error = execute_remote_command(hostname, command)
      if output:
          print("Output:\n", output)
      if error:
          print("Error:\n", error)
  ```

  * **Install `requests`:** `pip install requests`
  * **Set the API key:** `export AI_AGENT_API_KEY="your_secure_api_key"` (same as on the Linux machine)

**Important Notes about the REST API Example:**

* **HTTPS:** **Crucially, use HTTPS in a production environment.** This requires setting up SSL/TLS certificates for your Flask app. Let's Encrypt is a good option for free certificates.
* **API Key Security:** Store the API key securely on both the AI agent and the Linux machine. Use environment variables or a secrets management system.
* **Input Validation:** The Flask app should validate the `command` input to prevent command injection vulnerabilities. **Do not allow arbitrary commands to be executed.** Whitelist specific commands or use a safe command execution library.
* **Error Handling:** The example includes basic error handling, but you should implement more robust error handling in a production environment.
* **Timeouts:** The examples include timeouts to prevent commands from running indefinitely. Adjust the timeouts as needed.
* **Rate Limiting:** Implement rate limiting on the API endpoint to prevent abuse.
* **Logging:** Log all API requests and responses for auditing and troubleshooting.
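The whitelisting advice in the input-validation note can be sketched with a small helper. This is a minimal illustration, not a complete defense; `ALLOWED_COMMANDS` and `validate_command` are hypothetical names chosen for this sketch, and a real deployment would tailor the whitelist to its own needs:

```python
import shlex

# Hypothetical whitelist of permitted executables (an assumption for this sketch).
ALLOWED_COMMANDS = {"uname", "ls", "df", "uptime"}

def validate_command(command: str) -> list:
    """Parse a command string and return its argv list only if the
    executable is whitelisted; otherwise raise ValueError."""
    argv = shlex.split(command)  # shell-style tokenization, no expansion
    if not argv:
        raise ValueError("Empty command")
    if argv[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"Command not allowed: {argv[0]}")
    return argv
```

Passing the returned `argv` list to `subprocess.run(argv)` without `shell=True` means shell metacharacters in arguments are treated as literal text rather than interpreted, which closes the most common command-injection path in the Flask example above.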
**Vietnamese Translation of Key Concepts**

Here's a translation of some of the key concepts into Vietnamese:

* **MCP (Management and Configuration Protocol):** Giao thức Quản lý và Cấu hình
* **AI Agent:** Tác nhân AI
* **Linux Machine:** Máy Linux
* **SSH (Secure Shell):** Vỏ bảo mật (SSH)
* **REST API:** Giao diện lập trình ứng dụng REST (REST API)
* **Authentication:** Xác thực
* **Authorization:** Ủy quyền
* **Encryption:** Mã hóa
* **Firewall:** Tường lửa
* **SSH Key:** Khóa SSH
* **API Key:** Khóa API
* **Username:** Tên người dùng
* **Password:** Mật khẩu
* **Principle of Least Privilege:** Nguyên tắc đặc quyền tối thiểu
* **sudo:** Lệnh sudo (cho phép thực thi lệnh với quyền root)
* **Environment Variable:** Biến môi trường
* **Secrets Management System:** Hệ thống quản lý bí mật
* **Command Injection:** Tấn công chèn lệnh

**Choosing the Right Approach**

The best approach depends on your specific requirements:

* **Simplicity:** SSH is often the simplest option for basic command execution.
* **Fine-grained Control:** REST APIs offer the most fine-grained control over what the AI agent can do.
* **Performance:** gRPC is a good choice for high-performance communication.
* **Asynchronous Communication:** Message queues are useful for asynchronous communication.
* **Security:** Security should be a primary consideration in all cases.

**Important Considerations for Production Environments**

* **Secrets Management:** Use a secrets management system (e.g., HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) to store sensitive information such as passwords, API keys, and SSH private keys.
* **Monitoring and Logging:** Implement comprehensive monitoring and logging to track the AI agent's activity and identify potential security issues.
* **Regular Security Audits:** Conduct regular security audits to identify and address vulnerabilities.
* **Least Privilege:** Always adhere to the principle of least privilege.
* **Input Validation:** Thoroughly validate all input from the AI agent to prevent command injection and other vulnerabilities.
* **Rate Limiting:** Implement rate limiting to prevent abuse.
* **Timeouts:** Use timeouts to prevent commands from running indefinitely.
* **Update Regularly:** Keep all software up to date with the latest security patches.

Remember to adapt these guidelines to your specific AI agent and MCP implementation. Good luck!
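The rate-limiting recommendation above can be sketched as an in-process token bucket. `TokenBucket` is a hypothetical helper name for this sketch; a production Flask deployment would more likely enforce limits at a reverse proxy or with a dedicated extension, but the mechanism is the same:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: permits roughly `capacity`
    requests per `refill_period` seconds, allowing short bursts."""

    def __init__(self, capacity: int, refill_period: float):
        self.capacity = capacity
        self.tokens = float(capacity)           # start with a full bucket
        self.refill_rate = capacity / refill_period  # tokens added per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; return False when rate-limited."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In the REST API example, the `/execute` handler could call `bucket.allow()` before running the command and return HTTP 429 when it fails; keeping one bucket per API key limits each client independently.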

GitHub DeepWiki MCP Server

This MCP server provides detailed GitHub repository information by integrating with mcp.deepwiki.com and GitHub API to fetch repository wiki content, README files, and enable intelligent question-answering about repositories.

Gemini MCP Server

A Model Context Protocol server that gives Claude access to Google's Gemini 2.5 Pro for extended thinking, code analysis, and problem-solving with a massive context window.

MCPJsServerSample

MCP Node-Notifier Server

A Model Context Protocol server that provides system notification capabilities across various platforms (macOS, Windows, Linux) using node-notifier.

Nexus MCP Server

A production-ready Model Context Protocol server that brings AI-powered web search directly into development environments, providing intelligent search results with proper citations in Claude Desktop, Cursor, or any MCP-compatible client.

mcp-server

Nodit Mcp Server Poc

Proof of concept (POC) MCP server for the Nodit API.

Portainer MCP

Portainer MCP server.

Stock Ticker MCP Server

Provides real-time and historical stock market data through the Alpha Vantage API. Enables users to get current stock prices and retrieve historical stock data for any major stock symbol via natural language queries.

Bun SSE Transport for MCP

Enables real-time communication between client and server using Server-Sent Events (SSE) for the Model Context Protocol, built specifically for the Bun JavaScript runtime.

Morelogin Mcp

UnifAI MCP Server

Salesforce MCP Server

Enables authenticated interaction with Salesforce through OAuth Bearer token forwarding. Allows users to make API calls to Salesforce instances while maintaining secure session-based authentication throughout the MCP lifecycle.