Discover Awesome MCP Servers

Extend your agent with 17,231 capabilities via MCP servers.

Japanese Text Analyzer MCP Server

Counts characters and words in English and Japanese text files. Character counts exclude whitespace, and Japanese word counts use MeCab morphological analysis when available, falling back to space-based splitting otherwise, since Japanese text does not separate words with spaces.
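A condensed sketch of the counting approach described above, assuming the `mecab-python3` package (plus a system MeCab install) and UTF-8 encoded input; MeCab handles word segmentation because Japanese does not delimit words with spaces:

```python
import re
import MeCab  # pip install mecab-python3

def count_japanese(filepath: str) -> tuple[int, int]:
    """Count non-whitespace characters and MeCab-segmented words."""
    with open(filepath, encoding="utf-8") as f:  # UTF-8 is essential for Japanese
        text = f.read()
    # Character count excludes all whitespace (spaces, tabs, newlines).
    char_count = len(re.sub(r"\s", "", text))
    # Walk MeCab's node list, skipping the BOS/EOS sentinel nodes.
    node = MeCab.Tagger().parseToNode(text)
    word_count = 0
    while node:
        if node.feature.split(",")[0] != "BOS/EOS":
            word_count += 1
        node = node.next
    return char_count, word_count

print(count_japanese("japanese.txt"))  # placeholder filename
```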

mcp-osint OSINT Server

An MCP server that performs various open-source intelligence (OSINT) tasks by leveraging common network reconnaissance tools.
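"Common network reconnaissance tools" in practice means shelling out to CLI utilities; a hypothetical sketch of that pattern (the tool choice here is illustrative, not the server's actual tool set):

```python
import subprocess

def run_recon(tool: str, target: str) -> str:
    """Run a CLI recon utility (e.g. whois or dig) and capture its output."""
    result = subprocess.run(
        [tool, target], capture_output=True, text=True, timeout=60,
    )
    return result.stdout or result.stderr

print(run_recon("whois", "example.com"))
```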

AgentCraft MCP Server

Integrates with the AgentCraft framework to enable secure communication and data exchange between AI agents, supporting both prebuilt and custom enterprise AI agents.

MCP Server Coding Demo Guide

S3 MCP Server

An Amazon S3 Model Context Protocol server that lets large language models such as Claude interact with AWS S3 storage, providing tools for listing buckets, listing objects, and retrieving object contents.
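The three listed tools correspond to standard S3 operations; a sketch of the equivalent `boto3` calls (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")  # credentials resolve via the usual AWS config/env chain

# List buckets.
buckets = [b["Name"] for b in s3.list_buckets()["Buckets"]]

# List objects in a bucket.
objects = s3.list_objects_v2(Bucket="my-bucket").get("Contents", [])

# Retrieve one object's contents.
body = s3.get_object(Bucket="my-bucket", Key="notes.txt")["Body"].read()
print(buckets, [o["Key"] for o in objects], body[:80])
```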

MCP Server

Backstage MCP

A simple Backstage MCP server using quarkus-backstage.

MCP Etherscan Server

Mirror repository.

MCP SSH Server for Windsurf

An MCP SSH server for Windsurf integration.

mcp-server-cli

A Model Context Protocol server for running shell scripts or commands.
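At its core, a "run a shell command" tool reduces to a `subprocess` call like the following sketch (the server's actual interface may differ):

```python
import subprocess

def run_command(command: str) -> dict:
    """Execute a shell command and return its exit code and output."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=120,
    )
    return {
        "exit_code": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
    }

print(run_command("echo hello"))
```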

@f4ww4z/mcp-mysql-server

Mirror repository.

MCP SSE demo

A demo of MCP SSE server limitations using the Bun runtime.

Notion MCP Server

A Model Context Protocol server that gives AI models a standardized interface for accessing, querying, and modifying content in Notion workspaces.

mcp-excalidraw

A Model Context Protocol server that enables large language models (LLMs) to create, modify, and manipulate Excalidraw diagrams through a structured API.

Hevy MCP Server

filesystem

A Model Context Protocol server that extends AI capabilities by giving Claude and other AI assistants filesystem access and management features.
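Filesystem tools of this kind typically reduce to a few `pathlib` operations; an illustrative sketch (the server's real tool names and path safeguards are not shown):

```python
from pathlib import Path

def list_directory(path: str) -> list[str]:
    """List directory entries, marking subdirectories with a trailing slash."""
    return [p.name + ("/" if p.is_dir() else "") for p in Path(path).iterdir()]

def read_file(path: str) -> str:
    """Return a text file's contents."""
    return Path(path).read_text(encoding="utf-8")

print(list_directory("."))
```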

perplexity-server MCP Server

A Perplexity MCP server for Cline.

openpyxl_mcp_server

A lightweight wrapper around the OpenPyXL Python library that exposes Excel file operations as a Model Context Protocol (MCP) server, allowing Claude and other MCP clients to fetch and analyze data from Excel files.
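For reference, the underlying OpenPyXL calls such a wrapper builds on look roughly like this (file and sheet names are placeholders):

```python
from openpyxl import load_workbook

wb = load_workbook("report.xlsx", read_only=True)  # placeholder filename
ws = wb["Sheet1"]                                  # placeholder sheet name

# Iterate rows as plain tuples of cell values.
for row in ws.iter_rows(values_only=True):
    print(row)
```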

Fused MCP Agents: Setting up MCP Servers for Data

A Python-based MCP server that lets Claude and other LLMs execute arbitrary Python code directly through your desktop Claude app, enabling data scientists to connect LLMs to APIs and executable code.
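Executing arbitrary Python and returning its output is usually a capture-and-`exec` pattern; a minimal sketch of that idea (not Fused's actual implementation, and with no sandboxing):

```python
import contextlib
import io

def execute_python(code: str) -> str:
    """Run a Python snippet and capture anything it prints."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})  # fresh globals; trusted input only
    return buffer.getvalue()

print(execute_python("print(sum(range(10)))"))  # -> 45
```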

Legion MCP (Model Context Protocol) Server

A server designed to help users access and query data in databases using a Query Runner together with the integrated Model Context Protocol (MCP) Python SDK; a sketch of the query pattern follows the list. Supported databases include:

* PostgreSQL
* Redshift
* MySQL
* Microsoft SQL Server
* Google APIs
* Amazon Web Services (via boto3)
* CockroachDB
* SQLite
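A rough sketch of that query-runner pattern against SQLite, one of the supported backends (database path and query are placeholders):

```python
import sqlite3

def run_query(db_path: str, sql: str) -> list[tuple]:
    """Open a database, run a query, and return all result rows."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()

print(run_query("app.db", "SELECT name FROM users LIMIT 5"))
```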

OpenAPI MCP Server

This tool creates a Model Context Protocol (MCP) server that acts as a proxy for any API with an OpenAPI v3.1 specification, letting you interact with local and remote server APIs easily through Claude Desktop.
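Proxying an API from its spec starts by walking the specification's `paths` object to discover operations; a simplified sketch of that step (the spec URL is a placeholder):

```python
import json
from urllib.request import urlopen

# Placeholder URL; any OpenAPI v3.x JSON document has the same shape.
spec = json.load(urlopen("https://example.com/openapi.json"))

# Each path/method pair becomes a callable operation for the proxy.
for path, methods in spec["paths"].items():
    for method, op in methods.items():
        print(method.upper(), path, "-", op.get("summary", ""))
```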

EVM MCP Server

A comprehensive server that enables AI agents to interact with multiple Ethereum Virtual Machine (EVM)-compatible blockchain networks through a unified interface, supporting ENS name resolution, token operations, and smart contract interactions.
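The operations described map onto standard `web3.py` calls; a hedged sketch (the RPC endpoint is a placeholder, and the server's actual interface may differ):

```python
from web3 import Web3

# Placeholder endpoint; any EVM-compatible RPC node exposes this interface.
w3 = Web3(Web3.HTTPProvider("https://eth.example.com/rpc"))

# ENS resolution followed by a basic balance query.
address = w3.ens.address("vitalik.eth")
balance_wei = w3.eth.get_balance(address)
print(w3.from_wei(balance_wei, "ether"), "ETH")
```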

MCP Server Gateway

A gateway demo for MCP SSE servers.

MCP-server

Model Context Protocol (MCP)

The Model Context Protocol (MCP) is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers either expose their data through MCP servers or build AI applications (MCP clients) that connect to those servers.
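Concretely, a minimal MCP server exposing a single tool looks roughly like this with the official `mcp` Python SDK's FastMCP helper (a sketch following the SDK's documented decorator API):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")  # server name advertised to connecting clients

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```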

MCP GO Tools

A Go-centric Model Context Protocol (MCP) server that provides idiomatic Go code generation, style guidance, and best practices. It helps language models understand and generate high-quality Go code that follows established patterns and conventions.

Math-MCP

A Model Context Protocol server that provides basic mathematical and statistical functions to large language models (LLMs), enabling them to perform accurate numerical calculations through a simple API.
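Functions like these are thin wrappers over Python's `statistics` module; an illustrative sketch (the server's actual function list is not shown here):

```python
import statistics

def describe(values: list[float]) -> dict:
    """Basic descriptive statistics for a list of numbers."""
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
    }

print(describe([2.0, 4.0, 4.0, 5.0, 7.0, 9.0]))
```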

Linear MCP Server

A server that allows AI assistants to access and retrieve Linear issue data through the Model Context Protocol (MCP) standard, currently focused on fetching a user's to-do issues.

DuckDuckGo MCP Server

OneSignal MCP Server

A Model Context Protocol server that wraps the OneSignal REST API, enabling management of push notifications, email, SMS, user devices, and user segments across multiple OneSignal applications.
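For context, sending a push notification through the OneSignal REST API that this server wraps looks roughly like the following (the app ID and REST key are placeholders):

```python
import json
from urllib.request import Request, urlopen

payload = {
    "app_id": "YOUR_APP_ID",                       # placeholder
    "included_segments": ["Subscribed Users"],
    "contents": {"en": "Hello from OneSignal"},
}
req = Request(
    "https://onesignal.com/api/v1/notifications",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Basic YOUR_REST_API_KEY",  # placeholder
    },
)
print(urlopen(req).read().decode())
```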