Discover Awesome MCP Servers

Extend your agent with 14,392 capabilities via MCP servers.

MCP Compliance

An MCP server that supports compliance operations in AI agents.

dbx-mcp-server

A Model Context Protocol server that enables AI applications to interact with Dropbox, providing tools for file operations, metadata retrieval, search, and account management through Dropbox's API.
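
For orientation, a minimal sketch of the Dropbox API surface such a server wraps, using the official `dropbox` Python SDK; the access token and file path are placeholders, and the MCP tool wiring is omitted:

```python
# A sketch of the Dropbox SDK calls behind such tools. The token and the
# file path below are placeholders, not real values.
import dropbox

dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder token

# Account management: identify the account behind the token.
account = dbx.users_get_current_account()
print(account.name.display_name)

# File operations: list entries at the folder root.
for entry in dbx.files_list_folder("").entries:
    print(entry.name)

# Metadata retrieval for a single (placeholder) path.
meta = dbx.files_get_metadata("/report.pdf")
print(meta.path_display)
```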

S3 MCP Server

An Amazon S3 Model Context Protocol server that allows large language models like Claude to interact with AWS S3 storage, providing tools for listing buckets, listing objects, and retrieving object contents.
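
As a sketch of how these three tools might be exposed, here is a minimal version using the Python MCP SDK (`FastMCP`) and `boto3`; the tool names and behavior are illustrative assumptions, not this server's actual implementation:

```python
# A hedged sketch of the three advertised tools. Uses your ambient AWS
# credentials; pagination and error handling are omitted for brevity.
import boto3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("s3-sketch")
s3 = boto3.client("s3")

@mcp.tool()
def list_buckets() -> list[str]:
    """Return the names of all buckets visible to these credentials."""
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]

@mcp.tool()
def list_objects(bucket: str, prefix: str = "") -> list[str]:
    """Return object keys under a prefix (first page only)."""
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return [obj["Key"] for obj in resp.get("Contents", [])]

@mcp.tool()
def get_object(bucket: str, key: str) -> str:
    """Return an object's body decoded as UTF-8 text."""
    resp = s3.get_object(Bucket=bucket, Key=key)
    return resp["Body"].read().decode("utf-8")

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```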

MCP Etherscan Server

Mirror.

MCP SSH Server for Windsurf

An MCP SSH server for Windsurf integration.

mcp-server-cli

A Model Context Protocol server for running shell scripts or commands.

G-Search MCP

A powerful MCP server that performs parallel Google searches with multiple keywords simultaneously, returning structured results while handling CAPTCHAs and simulating human browsing patterns.
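
The fan-out idea can be sketched with nothing but the standard library; `search_google` below is a hypothetical stand-in for the server's browser-driven search, and the CAPTCHA handling and browsing simulation are deliberately not shown:

```python
# Illustrative fan-out only: `search_google` is a hypothetical placeholder
# for a browser-driven search; it returns dummy structured results.
from concurrent.futures import ThreadPoolExecutor

def search_google(keyword: str) -> list[dict]:
    # Placeholder: a real implementation drives a browser session here.
    return [{"keyword": keyword, "title": "...", "url": "..."}]

keywords = ["model context protocol", "mcp servers", "claude tools"]
with ThreadPoolExecutor(max_workers=len(keywords)) as pool:
    results = list(pool.map(search_google, keywords))

for batch in results:
    print(batch)
```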

@f4ww4z/mcp-mysql-server

Mirror.

MCP SSE demo

A demo of MCP SSE server limitations using the Bun runtime.

ThemeParks.wiki API MCP Server

An MCP server for the ThemeParks.wiki API.

reddit-mcp

An MCP server for Reddit.

MCP Server

DuckDuckGo MCP Server

OneSignal MCP Server

A Model Context Protocol server that wraps the OneSignal REST API, enabling management of push notifications, email, SMS, user devices, and user segments across multiple OneSignal applications.
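
As a hedged illustration of the kind of call being wrapped, here is a push notification created through OneSignal's classic v1 REST endpoint; the app ID and key are placeholders, and current OneSignal docs should be checked since the API has newer revisions:

```python
# Sketch only: sends one push notification via OneSignal's classic v1 API.
# YOUR_REST_API_KEY and YOUR_APP_ID are placeholders.
import requests

resp = requests.post(
    "https://onesignal.com/api/v1/notifications",
    headers={"Authorization": "Basic YOUR_REST_API_KEY"},
    json={
        "app_id": "YOUR_APP_ID",
        "included_segments": ["Subscribed Users"],
        "contents": {"en": "Hello from an MCP tool"},
    },
)
print(resp.status_code, resp.json())
```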

Backstage MCP

A simple Backstage MCP server built with quarkus-backstage.

Dynamic Shell Server

A Model Context Protocol (MCP) server that executes shell commands safely through a dynamic approval system. The server allows arbitrary commands to run while maintaining security through user approval and audit logging.
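
A minimal sketch of an approval-gated runner, assuming an interactive prompt and a local audit file; the real server's approval flow and log format may differ:

```python
# Sketch of approval plus audit logging around subprocess execution.
# AUDIT_LOG is a hypothetical location, not this server's actual path.
import shlex
import subprocess
from datetime import datetime, timezone

AUDIT_LOG = "audit.log"  # hypothetical audit file

def run_with_approval(command: str) -> str:
    # Gate: refuse to run anything the user does not explicitly approve.
    if input(f"Approve `{command}`? [y/N] ").strip().lower() != "y":
        raise PermissionError("Command rejected by user")
    result = subprocess.run(
        shlex.split(command), capture_output=True, text=True, timeout=60
    )
    # Audit: record what ran, when, and how it exited.
    with open(AUDIT_LOG, "a") as log:
        stamp = datetime.now(timezone.utc).isoformat()
        log.write(f"{stamp}\t{command}\texit={result.returncode}\n")
    return result.stdout

print(run_with_approval("echo hello"))
```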

Local Git MCP Server

quickchart-server MCP Server

An MCP server for generating customizable data visualizations using QuickChart.io, with support for multiple chart types and Chart.js configurations.
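
The underlying QuickChart.io call is simple enough to sketch: a chart is just a URL-encoded Chart.js configuration passed to the public `/chart` endpoint:

```python
# Build a QuickChart URL from a Chart.js config. Fetching the URL returns
# a rendered PNG of the chart; the data below is made up for illustration.
import json
from urllib.parse import quote

config = {
    "type": "bar",
    "data": {
        "labels": ["Q1", "Q2", "Q3"],
        "datasets": [{"label": "Sales", "data": [12, 19, 7]}],
    },
}
url = "https://quickchart.io/chart?c=" + quote(json.dumps(config))
print(url)
```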

Japanese Text Analyzer MCP Server

Measures character and word counts for English and Japanese text: characters are counted with spaces and line breaks excluded, and Japanese words are segmented with MeCab morphological analysis when it is available, rather than by whitespace.

grobid-MCP-Server-

PubMed Enhanced Search Server

Supports searching and retrieving scholarly papers from the PubMed database, with advanced features such as MeSH term lookup, publication statistics, and PICO-based evidence search.
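
PubMed tooling like this is typically built on NCBI's public E-utilities; a minimal sketch of that layer (the server's own internals may differ):

```python
# Query PubMed via NCBI E-utilities: esearch finds PMIDs, esummary fetches
# titles. The search term is an arbitrary MeSH example.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

ids = requests.get(
    f"{EUTILS}/esearch.fcgi",
    params={"db": "pubmed", "term": "asthma[MeSH Terms]", "retmode": "json"},
).json()["esearchresult"]["idlist"]

summaries = requests.get(
    f"{EUTILS}/esummary.fcgi",
    params={"db": "pubmed", "id": ",".join(ids[:5]), "retmode": "json"},
).json()

for pmid in ids[:5]:
    print(pmid, summaries["result"][pmid]["title"])
```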

MCP Server Gateway

A gateway demo for MCP SSE servers.

MCP-server

Notion MCP Server

A Model Context Protocol server that provides AI models with a standardized interface for accessing, querying, and modifying content in Notion workspaces.
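
As a sketch of the REST layer behind a typical search tool, here is a call to Notion's public `/v1/search` endpoint; the integration token is a placeholder, and `Notion-Version` pins the API revision this example assumes:

```python
# Search a Notion workspace for pages and databases matching a query.
# YOUR_INTEGRATION_TOKEN is a placeholder.
import requests

resp = requests.post(
    "https://api.notion.com/v1/search",
    headers={
        "Authorization": "Bearer YOUR_INTEGRATION_TOKEN",
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
    },
    json={"query": "roadmap", "page_size": 5},
)
for result in resp.json().get("results", []):
    print(result["object"], result["id"])
```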

filesystem

A Model Context Protocol server that extends AI capabilities by providing Claude and other AI assistants with filesystem access and management tools.
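
A minimal sketch of filesystem-style MCP tools using the Python SDK; the tool names and sandbox root are illustrative assumptions, not this server's API:

```python
# Two filesystem tools confined to a sandbox directory. ROOT is a
# hypothetical allowed directory; real servers take it as configuration.
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fs-sketch")
ROOT = Path("/tmp/sandbox")  # hypothetical allowed directory

def _safe(path: str) -> Path:
    """Resolve a path and refuse anything that escapes the sandbox."""
    p = (ROOT / path).resolve()
    if not p.is_relative_to(ROOT.resolve()):
        raise ValueError("path escapes the allowed directory")
    return p

@mcp.tool()
def list_dir(path: str = ".") -> list[str]:
    """List entries in a directory inside the sandbox."""
    return [child.name for child in _safe(path).iterdir()]

@mcp.tool()
def read_file(path: str) -> str:
    """Read a UTF-8 text file inside the sandbox."""
    return _safe(path).read_text(encoding="utf-8")

if __name__ == "__main__":
    mcp.run()
```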

OpenAPI MCP Server

This tool creates a Model Context Protocol (MCP) server that acts as a proxy for any API with an OpenAPI v3.1 specification, making it easy to interact with local and remote server APIs through Claude Desktop.
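
The core idea can be sketched in a few lines: read an OpenAPI document and enumerate the operations a proxy could expose as tools; the spec URL is a placeholder and the real server's mapping logic is not shown:

```python
# Enumerate OpenAPI operations as candidate tool names. The spec URL is a
# placeholder; any OpenAPI v3 JSON document has this paths/methods shape.
import requests

spec = requests.get("https://example.com/openapi.json").json()

for path, methods in spec.get("paths", {}).items():
    for method, op in methods.items():
        if method not in {"get", "post", "put", "patch", "delete"}:
            continue  # skip shared keys such as "parameters"
        name = op.get("operationId", f"{method}_{path}")
        print(f"tool: {name}  ->  {method.upper()} {path}")
```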

Hevy MCP Server

EVM MCP Server

A comprehensive server that enables AI agents to interact with multiple Ethereum Virtual Machine (EVM) compatible blockchain networks through a unified interface, with support for ENS name resolution, token operations, and smart contract interactions.
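
Beneath helpers like these sits the JSON-RPC interface that every EVM chain shares; a minimal sketch using a placeholder RPC endpoint and the zero address:

```python
# Raw EVM JSON-RPC calls: fetch the latest block number and a balance.
# RPC_URL is a placeholder endpoint; results are hex-encoded integers.
import requests

RPC_URL = "https://rpc.example.org"  # placeholder EVM endpoint

def rpc(method: str, params: list) -> dict:
    return requests.post(
        RPC_URL,
        json={"jsonrpc": "2.0", "id": 1, "method": method, "params": params},
    ).json()

latest = rpc("eth_blockNumber", [])
print("latest block:", int(latest["result"], 16))

zero = "0x0000000000000000000000000000000000000000"
balance = rpc("eth_getBalance", [zero, "latest"])
print("balance (wei):", int(balance["result"], 16))
```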

Kafka MCP Server

Enables AI models to publish and consume messages from Apache Kafka topics through a standardized interface, making it easy to integrate Kafka messaging with LLM and agent applications.
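
A hedged sketch of the publish/consume pair such a server wraps, here using the `kafka-python` client with placeholder broker and topic names:

```python
# Publish one message to a topic, then read it back. Broker address and
# topic name are placeholders for whatever the server is configured with.
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("agent-events", b'{"role": "assistant", "text": "hello"}')
producer.flush()

consumer = KafkaConsumer(
    "agent-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating if no message arrives
)
for message in consumer:
    print(message.value.decode("utf-8"))
```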

Mcp Server Chatsum

Summarizes your WeChat chat messages.