Discover Awesome MCP Servers

Extend your agent with 15,933 capabilities via MCP servers.

Browser Monitoring MCP Server

Opens a browser that monitors and captures console logs and network requests, providing the LLM with structured data about a web page's behavior.

Wikidata MCP Server

A server implementation for interacting with the Wikidata API via the Model Context Protocol. Provides tools for searching identifiers, extracting metadata, executing SPARQL queries, and more.

MCP Vault

Enables AI assistants to interact with Obsidian vaults through file operations like moving/renaming files and analyzing markdown heading structures. Bridges AI assistants with Obsidian using the Local REST API plugin for seamless vault management.

JIRA MCP Server

Provides tools for AI assistants to interact with JIRA APIs, enabling them to read, create, update, and manage JIRA issues through standardized MCP tools.

jcrawl4ai-mcp-server

GoScry

GoScry is a server application written in Go that acts as a bridge between a controlling system (such as an LLM or script) and a web browser.

MCP Server Demo

A WebSocket-based Model Control Protocol (MCP) server that processes model requests and provides responses. Supports chat and text completion actions with a standardized JSON protocol for AI model communication.
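The request/response shapes such a server might exchange can be sketched as JSON payloads. The field names below ("action", "messages", "result") are assumptions inferred from the description, not the server's documented schema:

```python
import json

# Hypothetical chat request for the demo server; the "action" and
# "messages" field names are assumed, not taken from its documentation.
request = {
    "action": "chat",
    "messages": [{"role": "user", "content": "Summarize this repo"}],
}

# A matching hypothetical response envelope.
response = {
    "action": "chat",
    "result": {"role": "assistant", "content": "It is an MCP demo server."},
}

# WebSocket frames would carry these as JSON text.
frame = json.dumps(request)
decoded = json.loads(frame)
```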

BatchIt

A simple aggregator server that batches multiple MCP tool calls into a single request, reducing an AI agent's token usage and network overhead.
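The aggregation idea can be sketched as bundling several tool calls into one payload. The "batch" wrapper and tool names below are an assumed shape for illustration, not BatchIt's documented format:

```python
import json

# Hypothetical aggregated request: three MCP tool calls bundled into one
# payload, so the agent pays for one round trip instead of three.
calls = [
    {"tool": "read_file", "arguments": {"path": "notes.md"}},
    {"tool": "list_directory", "arguments": {"path": "."}},
    {"tool": "get_file_info", "arguments": {"path": "notes.md"}},
]
batched = json.dumps({"batch": calls})

decoded = json.loads(batched)
```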

mcp-pandoc-ts: A Document Conversion MCP Server (TypeScript/Host Service Version)

An MCP server that controls pandoc on the host machine from within a Docker environment, using a local pandoc host service.

MCP Log Reader

A specialized MCP server that helps analyze and debug Model Context Protocol logs by giving Claude direct access to log files across multiple platforms.

Thingiverse MCP Server

A Model Context Protocol server that enables AI assistants to search, explore, and retrieve 3D-printable models from Thingiverse.

LinkedIn MCP Server by CData

MCP-Censys

A Model Context Protocol server that enables natural-language queries against the Censys Search API for domain, IP, and FQDN reconnaissance. Provides real-time information about hosts, DNS, certificates, and services.

Kibela MCP Server

A Model Context Protocol server that allows AI applications to interact with Kibela knowledge bases, enabling users to search, create, update, and organize content through natural language.

SeekChat

SeekChat supports MCP tool execution, letting the AI control the computer directly to perform a variety of tasks. It makes it easy to automate file management, data analysis, code development, and more, turning the AI into a truly intelligent assistant.

Joplin MCP Server

A server that lets AI assistants access and manipulate Joplin notes through the Model Context Protocol. Enables searching, reading, editing, and managing notes using natural language.

TianGong-LCA-MCP Server

A Model Context Protocol server for TianGong life cycle assessment (LCA) applications, supporting both STDIO and SSE protocols.

Ekispert MCP Remote Server

Enables route search and PDF generation using Japan's Ekispert API through a remote MCP server. Deployed on Cloudflare Workers with secure OAuth authentication and persistent PDF storage via R2.

PlainlyVideosMCP

MCP server for Plainly Videos that allows browsing designs and projects, as well as rendering videos.

Academic Author Network MCP Server

Enables analysis of academic author networks and research collaborations by retrieving co-authors and research keywords from sources like Semantic Scholar, OpenAlex, Crossref, and Google Scholar.

Remote MCP Server

A Cloudflare-deployable server that implements Model Context Protocol (MCP) capabilities, allowing AI assistants like Claude to access custom tools via OAuth authentication flows.

Kong Konnect MCP Server

A Model Context Protocol server that lets AI assistants work with Kong Konnect's API gateway. Provides tools for querying analytics data, inspecting configuration, and managing control planes through natural language.

Buildkite MCP Server

A Model Context Protocol (MCP) server for Buildkite integration.

FastAPI-MCP

A zero-configuration tool that automatically exposes FastAPI endpoints as Model Context Protocol (MCP) tools, letting LLM systems like Claude work with your API without additional coding.
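The zero-configuration idea, deriving a tool schema from an endpoint's type signature, can be sketched with standard-library introspection. This is a toy illustration of the general technique, not FastAPI-MCP's actual implementation:

```python
import inspect

def greet(name: str, times: int = 1) -> str:
    """Return a repeated greeting."""
    return ("Hello, " + name + "! ") * times

def tool_schema(fn):
    # Derive a minimal MCP-style tool description from a function
    # signature: annotation -> type, missing default -> required.
    sig = inspect.signature(fn)
    params = {
        name: {
            "type": p.annotation.__name__,
            "required": p.default is inspect.Parameter.empty,
        }
        for name, p in sig.parameters.items()
    }
    return {"name": fn.__name__, "description": fn.__doc__, "parameters": params}

schema = tool_schema(greet)
```

A real integration would register such schemas with an MCP server and dispatch incoming tool calls back to the decorated endpoints.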

Terminal.shop MCP Server

Enables AI assistants to work seamlessly with Terminal.shop: browsing products, managing shopping carts, placing orders, and handling subscriptions through the Terminal.shop API.

Part 1. Real-Time LangGraph Agent with MCP Tool Execution

This project demonstrates a loosely coupled, real-time agent architecture that connects a LangGraph agent to remote tools provided by a custom MCP (Modular Command Protocol) server. The architecture enables a flexible, scalable multi-agent system in which each tool can be hosted independently (via SSE or STDIO), offering modularity and cloud-deployable executables.

Slack MCP Server

A comprehensive Slack integration server that enables sending messages, managing channels, uploading files, and running Pomodoro timers through FastMCP v2.

GHAS MCP server (GitHub Advanced Security)

This server integrates with GitHub Advanced Security, loading security alerts into your context. It supports Dependabot security alerts, secret scanning alerts, and code security alerts.

Clay

A Model Context Protocol (MCP) server for Clay (https://clay.earth). Search your email, calendar, Twitter / X, Linkedin, iMessage, Facebook, and WhatsApp contacts. Take notes, set reminders, and more.

Wolfram Alpha

Connecting a chat repl to Wolfram Alpha's computational intelligence typically involves using Wolfram Alpha's API. Here's a general outline of the steps and considerations, along with example code snippets (using Python as the example language, since it's commonly used with repl.it):

**1. Get a Wolfram Alpha API Key:**

* Go to the Wolfram Alpha Developer Portal: [https://developer.wolframalpha.com/](https://developer.wolframalpha.com/)
* Create an account (if you don't have one).
* Create a new App. This will give you an App ID, which is your API key. Keep this key secret!

**2. Choose a Programming Language and Libraries:**

* **Python:** A popular choice. You'll likely use the `requests` library to make HTTP requests to the Wolfram Alpha API. You might also use a library like `xmltodict` to parse the XML response from Wolfram Alpha.

**3. Set up your repl.it Environment:**

* Create a new repl.it project (e.g., Python).
* Install the necessary libraries. You can do this by adding them to the `pyproject.toml` file (if using Poetry) or by using the repl.it package manager. For example, add these to your `pyproject.toml` file:

```toml
[tool.poetry.dependencies]
python = "^3.8"
requests = "^2.28.1"
xmltodict = "^0.13.0"
```

* Then, run `poetry install` in the repl.it shell.

**4. Write the Code:**

Here's a basic Python example:

```python
import requests
import xmltodict
import os

# Retrieve your Wolfram Alpha App ID (API key) from an environment variable
WOLFRAM_ALPHA_APP_ID = os.environ.get("WOLFRAM_ALPHA_APP_ID")

def query_wolfram_alpha(query):
    """
    Queries the Wolfram Alpha API and returns the result.
    """
    if not WOLFRAM_ALPHA_APP_ID:
        return ("Error: Wolfram Alpha App ID not set. Please set the "
                "WOLFRAM_ALPHA_APP_ID environment variable.")

    base_url = "http://api.wolframalpha.com/v2/query"
    params = {
        "input": query,
        "appid": WOLFRAM_ALPHA_APP_ID,
        "output": "XML"  # Request XML output for easier parsing
    }

    try:
        response = requests.get(base_url, params=params)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)

        xml_data = response.text
        data = xmltodict.parse(xml_data)

        # Extract relevant information from the XML
        results = []
        if 'queryresult' in data and data['queryresult']['@success'] == 'true':
            pods = data['queryresult']['pod']
            for pod in pods:
                if pod['@title'] != 'Input interpretation':  # Skip input interpretation
                    if isinstance(pod['subpod'], list):
                        for subpod in pod['subpod']:
                            if 'img' in subpod and '@src' in subpod['img']:
                                results.append(f"{pod['@title']}: {subpod['img']['@alt']}")
                            elif 'plaintext' in subpod:
                                results.append(f"{pod['@title']}: {subpod['plaintext']}")
                    else:
                        subpod = pod['subpod']
                        if 'img' in subpod and '@src' in subpod['img']:
                            results.append(f"{pod['@title']}: {subpod['img']['@alt']}")
                        elif 'plaintext' in subpod:
                            results.append(f"{pod['@title']}: {subpod['plaintext']}")

            if not results:
                return "Wolfram Alpha couldn't find a relevant answer."
            else:
                return "\n".join(results)
        else:
            return "Wolfram Alpha couldn't understand the query."

    except requests.exceptions.RequestException as e:
        return f"Error: Network error - {e}"
    except Exception as e:
        return f"Error: An unexpected error occurred - {e}"

# Example usage (replace with your chat input)
if __name__ == "__main__":
    user_query = input("Enter your query for Wolfram Alpha: ")
    result = query_wolfram_alpha(user_query)
    print(result)
```

**Key improvements and explanations:**

* **Environment Variables:** The code uses `os.environ.get("WOLFRAM_ALPHA_APP_ID")` to retrieve the API key from an environment variable. **This is crucial for security.** Never hardcode your API key directly into your code. In repl.it, you can set environment variables in the "Secrets" tab (the lock icon in the left sidebar). Set a secret named `WOLFRAM_ALPHA_APP_ID` and paste your API key as the value.
* **Error Handling:** Includes `try...except` blocks to handle potential errors like network issues (`requests.exceptions.RequestException`) and other unexpected exceptions. This makes the code more robust. The `response.raise_for_status()` line is important; it will raise an HTTPError if the API returns a 4xx or 5xx status code (indicating an error).
* **XML Parsing:** Uses `xmltodict` to parse the XML response from Wolfram Alpha into a Python dictionary. This makes it much easier to access the data.
* **XML Data Extraction:** The code iterates through the `pod` elements in the XML response and extracts the relevant information (plaintext or image URLs) from the `subpod` elements. It handles cases where `subpod` is a list or a single dictionary. It also skips the "Input interpretation" pod, which is usually not what you want to display.
* **Clearer Output:** Formats the output to be more readable, including the title of each pod.
* **No Hardcoded API Key:** The API key is *never* stored directly in the code.
* **Handles No Results:** The code checks if Wolfram Alpha couldn't understand the query or couldn't find a relevant answer and returns an appropriate message.
* **More Robust XML Parsing:** The code checks that `queryresult` contains the `@success` attribute and that it's set to `true` before attempting to parse the pods. This prevents errors if the query fails.
* **Clearer Error Messages:** The error messages indicate the type of error that occurred.

**5. Integrate with your Chat Repl:**

* This example provides the core functionality. You'll need to integrate it with your specific chat repl setup. This will involve:
  * Receiving user input from your chat interface.
  * Passing the user input to the `query_wolfram_alpha` function.
  * Displaying the result from `query_wolfram_alpha` back to the user in your chat interface.

**Example of integrating with a simple chat loop:**

```python
# (Previous code from above goes here)

if __name__ == "__main__":
    print("Welcome to the Wolfram Alpha Chat!")
    while True:
        user_query = input("You: ")
        if user_query.lower() == "exit":
            break
        result = query_wolfram_alpha(user_query)
        print("Wolfram Alpha:", result)
    print("Goodbye!")
```

**Important Considerations:**

* **API Usage Limits:** Wolfram Alpha's API has usage limits. Be mindful of these limits to avoid being blocked. Check the Wolfram Alpha Developer Portal for details on the limits for your API key type.
* **Error Handling:** Implement robust error handling to gracefully handle network errors, API errors, and invalid user input.
* **Security:** **Never** hardcode your API key directly into your code. Use environment variables or a secure configuration file.
* **Rate Limiting:** Consider implementing rate limiting on your side to prevent users from overwhelming the Wolfram Alpha API.
* **Asynchronous Operations:** For more complex chat applications, consider using asynchronous operations (e.g., `asyncio` in Python) to avoid blocking the main thread while waiting for the Wolfram Alpha API to respond.
* **API Response Format:** The example uses XML output. You can also request JSON output, which might be easier to parse in some cases. Change the `output` parameter in the `params` dictionary to `"JSON"` if you want JSON output. You'll need to adjust the parsing logic accordingly.
* **Wolfram Language:** For very complex tasks, you might consider using the Wolfram Language directly (if you have a Wolfram Engine license). This gives you more control over the computation.

This guide should help you connect your chat repl to Wolfram Alpha. Remember to set your actual API key as a secret and adapt the code to fit your specific chat application. Good luck!