Discover Awesome MCP Servers
Extend your agent with 23,710 capabilities via MCP servers.
- All (23,710)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
PlainlyVideosMCP
MCP server for Plainly Videos that allows browsing designs and projects, as well as rendering videos.
MCP Performance Analyzer
Monitors and analyzes mobile application performance data to detect severe issues such as excessive memory usage and view count growth. Provides intelligent analysis with customizable rules and integrates seamlessly with development workflows.
Thingiverse MCP Server
A Model Context Protocol server that enables AI assistants to search, browse, and retrieve 3D-printable models from Thingiverse.
UAAR University MCP Server
Provides structured access to PMAS Arid Agriculture University Rawalpindi's academic resources, admissions, and student services through 53 specialized tools. AI agents can manage course information, library services, hostel details, and administrative functions using secure stdio or HTTP transports.
stats-compass-mcp
Stats Compass provides analysis and modeling tools for AI-automated data science workflows.
CloudLab MCP Server
Enables AI assistants to manage CloudLab/Emulab experiments, including listing experiments, controlling nodes (reboot, reload, power cycle), retrieving console logs, extending experiment time, and terminating experiments.
Azure Table MCP Server by CData
Apache HBase MCP Server by CData
Fastly
CockroachDB MCP Server
Enables AI assistants to query and interact with CockroachDB clusters through natural language, supporting schema discovery, CRUD operations, transactions, cluster monitoring, and data export with configurable safety controls.
TimeMCP
An MCP server written in Go, built as a replacement for the modelcontextprotocol/time MCP server.
LinkedIn MCP Server by CData
Kibela MCP Server
A Model Context Protocol server that allows AI applications to interact with Kibela knowledge bases, enabling users to search, create, update, and organize content through natural language.
Remote MCP Server
A Cloudflare-deployable server that implements Model Context Protocol (MCP) capabilities, allowing AI assistants like Claude to access custom tools via OAuth authentication flows.
SVG Maker MCP Server
Enables creation, validation, rendering, and optimization of SVG images with conversion capabilities to PNG, React components, React Native components, and Data URIs.
Lunar Calendar MCP Server
Provides Chinese traditional calendar functions including BaZi calculation, solar-lunar calendar conversion, Huangli almanac queries, daily fortune readings, 24 solar terms, and Wu Xing (Five Elements) analysis.
Reservation System MCP Server
An MCP server that allows AI models to securely access and manage WeChat Cloud-based reservation data. It provides tools for querying records, updating appointment statuses, and deleting entries through the WeChat Cloud Development API.
MCP Boilerplate
A modern, lightning-fast starter template for building Model Context Protocol applications with Bun, enabling developers to create MCP servers with TypeScript support, validation, and environment configuration.
BBOT MCP Server
Enables users to run and manage BBOT security scans through the MCP interface. Provides comprehensive tools for executing reconnaissance scans, monitoring progress, and retrieving results with support for concurrent scanning operations.
simplifier-mcp
An MCP (Model Context Protocol) server that enables integration with the Simplifier Low Code Platform. This server provides tools and capabilities for creating and managing Simplifier Connectors and BusinessObjects through the platform's REST API.
Slack MCP Server
A comprehensive Slack integration server that enables sending messages, managing channels, uploading files, and running Pomodoro timers through FastMCP v2.
GHAS MCP server (GitHub Advanced Security)
This server integrates with GitHub Advanced Security and loads security alerts into your context. It supports Dependabot security alerts, secret scanning alerts, and code security alerts.
Example MCP Server with FastMCP
An educational example demonstrating how to build MCP servers in Python using FastMCP, showing how to expose tools, resources, and prompts to AI clients.
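To make the pattern concrete, here is a minimal sketch of the kind of server such an example typically demonstrates, assuming the `fastmcp` Python package and its decorator-based API (`FastMCP`, `@mcp.tool`, `@mcp.resource`, `@mcp.prompt`). The server name and the tool, resource, and prompt below are illustrative placeholders, not code from the listed project.

```python
from fastmcp import FastMCP  # assumes the fastmcp package is installed

# Illustrative server and member names -- not taken from the listed project.
mcp = FastMCP("demo-server")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b


@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """Return a plain-text greeting resource for the given name."""
    return f"Hello, {name}!"


@mcp.prompt()
def summarize(text: str) -> str:
    """Prompt template asking the model to summarize the given text."""
    return f"Please summarize the following text:\n\n{text}"


if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport used by local MCP clients
```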
Clay
A Model Context Protocol (MCP) server for Clay (https://clay.earth). Search your email, calendar, Twitter/X, LinkedIn, iMessage, Facebook, and WhatsApp contacts. Take notes, set reminders, and more.
Wolfram Alpha
Connecting a chat REPL to Wolfram Alpha's computational intelligence typically involves using the Wolfram Alpha API. Here's a general outline of the steps, along with example code snippets (using Python, since it's commonly used with repl.it).

**1. Get a Wolfram Alpha API key**

* Go to the Wolfram Alpha Developer Portal: [https://developer.wolframalpha.com/](https://developer.wolframalpha.com/)
* Create an account (if you don't have one).
* Create a new App. This gives you an App ID, which is your API key. Keep this key secret.

**2. Choose a programming language and libraries**

* **Python:** A popular choice. You'll likely use the `requests` library to make HTTP requests to the Wolfram Alpha API, and a library like `xmltodict` to parse the XML response.

**3. Set up your repl.it environment**

* Create a new repl.it project (e.g., Python).
* Install the necessary libraries, either through the repl.it package manager or by adding them to `pyproject.toml` (if using Poetry):

```toml
[tool.poetry.dependencies]
python = "^3.8"
requests = "^2.28.1"
xmltodict = "^0.13.0"
```

* Then run `poetry install` in the repl.it shell.

**4. Write the code**

Here's a basic Python example:

```python
import os

import requests
import xmltodict

# Read the Wolfram Alpha App ID (API key) from an environment variable.
WOLFRAM_ALPHA_APP_ID = os.environ.get("WOLFRAM_ALPHA_APP_ID")


def query_wolfram_alpha(query):
    """Queries the Wolfram Alpha API and returns the result."""
    if not WOLFRAM_ALPHA_APP_ID:
        return ("Error: Wolfram Alpha App ID not set. "
                "Please set the WOLFRAM_ALPHA_APP_ID environment variable.")

    base_url = "http://api.wolframalpha.com/v2/query"
    params = {
        "input": query,
        "appid": WOLFRAM_ALPHA_APP_ID,
        "output": "XML",  # Request XML output for easier parsing
    }

    try:
        response = requests.get(base_url, params=params)
        response.raise_for_status()  # Raise HTTPError for 4xx/5xx responses
        data = xmltodict.parse(response.text)

        # Extract relevant information from the XML.
        results = []
        if 'queryresult' in data and data['queryresult']['@success'] == 'true':
            pods = data['queryresult']['pod']
            if isinstance(pods, dict):  # xmltodict returns a dict when there is a single pod
                pods = [pods]
            for pod in pods:
                if pod['@title'] == 'Input interpretation':
                    continue  # Skip the echo of the query itself
                subpods = pod['subpod']
                if isinstance(subpods, dict):  # Same normalization for a single subpod
                    subpods = [subpods]
                for subpod in subpods:
                    if 'img' in subpod and '@src' in subpod['img']:
                        results.append(f"{pod['@title']}: {subpod['img']['@alt']}")
                    elif 'plaintext' in subpod:
                        results.append(f"{pod['@title']}: {subpod['plaintext']}")
            if not results:
                return "Wolfram Alpha couldn't find a relevant answer."
            return "\n".join(results)
        else:
            return "Wolfram Alpha couldn't understand the query."
    except requests.exceptions.RequestException as e:
        return f"Error: Network error - {e}"
    except Exception as e:
        return f"Error: An unexpected error occurred - {e}"


# Example usage (replace with your chat input)
if __name__ == "__main__":
    user_query = input("Enter your query for Wolfram Alpha: ")
    print(query_wolfram_alpha(user_query))
```

**Key points in this code:**

* **Environment variables:** The API key is read with `os.environ.get("WOLFRAM_ALPHA_APP_ID")` and is never hardcoded, which is crucial for security. In repl.it, set a secret named `WOLFRAM_ALPHA_APP_ID` in the "Secrets" tab (the lock icon in the left sidebar) and paste your API key as the value.
* **Error handling:** `try...except` blocks handle network issues (`requests.exceptions.RequestException`) and other unexpected exceptions, and `response.raise_for_status()` raises an `HTTPError` if the API returns a 4xx or 5xx status code.
* **XML parsing and extraction:** `xmltodict` parses the XML response into a Python dictionary. The code checks the `@success` attribute on `queryresult` before reading pods, iterates through the `pod` elements, extracts plaintext or image alt text from each `subpod` (handling both single and multiple pods/subpods), and skips the "Input interpretation" pod, which is usually not what you want to display.
* **Clear output and messages:** Results are formatted with the title of each pod, and informative messages are returned when Wolfram Alpha can't understand the query or finds no relevant answer.

**5. Integrate with your chat REPL**

The code above provides the core functionality. To integrate it, receive user input from your chat interface, pass it to `query_wolfram_alpha`, and display the result back to the user. A simple chat loop:

```python
# (Previous code from above goes here)

if __name__ == "__main__":
    print("Welcome to the Wolfram Alpha Chat!")
    while True:
        user_query = input("You: ")
        if user_query.lower() == "exit":
            break
        print("Wolfram Alpha:", query_wolfram_alpha(user_query))
    print("Goodbye!")
```

**Important considerations:**

* **API usage limits:** Wolfram Alpha's API has usage limits. Check the Developer Portal for the limits that apply to your key type and stay within them to avoid being blocked.
* **Error handling:** Implement robust error handling for network errors, API errors, and invalid user input.
* **Security:** Never hardcode your API key; use environment variables or a secure configuration file.
* **Rate limiting:** Consider rate limiting on your side so users can't overwhelm the Wolfram Alpha API.
* **Asynchronous operations:** For more complex chat applications, consider asynchronous operations (e.g., `asyncio` in Python) so the main thread isn't blocked while waiting for the API to respond.
* **API response format:** The example uses XML output. You can also request JSON output, which can be easier to parse in some cases: change the `output` parameter to `"JSON"` and adjust the parsing logic accordingly (a minimal sketch follows these considerations).
* **Wolfram Language:** For very complex tasks, you might use the Wolfram Language directly (if you have a Wolfram Engine license), which gives you more control over the computation.

This guide should help you connect your chat REPL to Wolfram Alpha. Remember to set your actual API key via the environment variable and adapt the code to fit your specific chat application.
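As a rough illustration of the JSON option mentioned in the considerations above, here is a minimal sketch that requests `output=JSON` and parses the response without `xmltodict`. The helper name `query_wolfram_alpha_json` is made up for this example, and the JSON field names (`queryresult`, `success`, `pods`, `subpods`, `plaintext`) are assumptions about the v2 API's JSON layout; verify them against a real response.

```python
import os

import requests


def query_wolfram_alpha_json(query):
    """Hypothetical JSON variant of query_wolfram_alpha (field names are assumptions)."""
    app_id = os.environ.get("WOLFRAM_ALPHA_APP_ID")
    if not app_id:
        return "Error: WOLFRAM_ALPHA_APP_ID is not set."

    response = requests.get(
        "http://api.wolframalpha.com/v2/query",
        params={"input": query, "appid": app_id, "output": "JSON"},
        timeout=30,
    )
    response.raise_for_status()
    result = response.json().get("queryresult", {})

    if not result.get("success"):
        return "Wolfram Alpha couldn't understand the query."

    lines = []
    for pod in result.get("pods", []):
        title = pod.get("title", "")
        if title == "Input interpretation":
            continue  # Skip the echo of the query itself, as in the XML version
        for subpod in pod.get("subpods", []):
            text = subpod.get("plaintext")
            if text:
                lines.append(f"{title}: {text}")
    return "\n".join(lines) or "Wolfram Alpha couldn't find a relevant answer."


if __name__ == "__main__":
    print(query_wolfram_alpha_json("population of France"))
```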
Clean-Cut-MCP
Enables users to create professional React-powered videos through Claude Desktop using natural language and the Remotion framework. It provides a persistent studio environment for generating animations, managing video assets, and rendering high-quality content.
PostgreSQL MCP Server
Enables natural language interaction with PostgreSQL databases through Amazon Q for querying data, listing tables, and describing schemas. It provides secure, read-only access with automatic row limits to ensure efficient database exploration.
Obsidian iCloud MCP
Connects an Obsidian vault stored in iCloud Drive to AI models via the Model Context Protocol, enabling AI assistants to access and work with Obsidian notes.
MCP Linux Deployment
Enables management of Windows servers from Linux through an MCP server with per-user installation. Provides tools to control Windows systems via API with secure credential management.
Excel MCP Server
Mirror (鏡, Kagami).