Discover Awesome MCP Servers

Extend your agent with 14,392 capabilities via MCP servers.

MCP TypeScript Simple Template

A foundation for creating custom Model Context Protocol servers that can integrate with AI systems, including a simple BMI (Body Mass Index) calculator tool as an example implementation.

MCPE Server Proxy

Mirror.

Vercel MCP Integration

An MCP server that connects Claude to Vercel.

Unichat MCP Server in TypeScript

Mirror.

web-search-mcp

An MCP server for web search.

LLM_MCP

Build MCP clients and servers for LLMs.

MCP Proxy

A TypeScript SSE proxy for MCP servers that use the stdio transport.

Mac Shell MCP Server

An MCP server that allows safe execution of macOS terminal commands through Claude or Roo Code, with a built-in security whitelist and approval mechanism.
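A command whitelist of the kind this server describes could be sketched as follows. This is a minimal illustration, not the project's actual code; the function name and the allowed commands are hypothetical.

```python
import shlex

# Hypothetical whitelist; the real server's list is not documented in this listing.
ALLOWED_COMMANDS = {"ls", "pwd", "cat", "echo", "git"}

def is_whitelisted(command_line: str) -> bool:
    """Approve a command line only if its executable is on the whitelist."""
    try:
        parts = shlex.split(command_line)
    except ValueError:
        return False  # e.g. unbalanced quotes
    return bool(parts) and parts[0] in ALLOWED_COMMANDS
```

Commands that fail the check would then be routed to the approval mechanism instead of being executed directly.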

Terminal MCP Server

An MCP server that allows AI models to execute system commands on the local machine or on remote hosts over SSH, with support for persistent sessions and environment variables.

Websearch

A Model Context Protocol server that enables AI assistants to perform real-time web searches and retrieve up-to-date information from the internet via a crawler API.

mcp-server-espn

mcp-server-agentops

The official MCP server for the AgentOps SDK.

Qase MCP Server

A Model Context Protocol server that integrates with the Qase test management platform, allowing users to create and retrieve test cases, manage test runs, and interact with Qase projects.

Coolify MCP Server

A Model Context Protocol server that lets AI assistants manage servers, applications, databases, and deployments by interacting with a Coolify instance through natural language.

FridayAI

An AI gaming companion that helps you complete tasks.

mcp-server

ming-mcp-server MCP Server

Wandering RAG

A CLI tool for personal RAG that retrieves information from Notion, Obsidian, Apple Notes, and other sources stored in Qdrant, and exposes it as an MCP server.

mcp-voice

An MCP server for voice AI, using OpenAI.

MCP Servers - OpenAI and Flux Integration

A Pokedex web app!

Multi Model Advisor

A council of decision-making models.

Tiny MCP Server (Rust)

An MCP implementation written in Rust.

Design Tokens - MCP Server

An MCP server for Figma. It can also create images.

PortOne MCP Server

A PortOne MCP server for developers.

Unity MCP Server - Enhancing Unity Editor Actions with MCP Clients 🎮

A Unity MCP server that lets MCP clients such as Claude Desktop or Cursor perform Unity Editor actions.

Lighthouse MCP

A Model Context Protocol server that enables Claude to interact with and analyze your Lighthouse.one cryptocurrency portfolio data through secure authentication.

YouTube Transcript MCP Server

There are a few ways to approach building an MCP server for fetching YouTube transcripts. Here's a breakdown of the concepts and potential implementations.

**Understanding the Requirements**

* **YouTube Data API:** You'll need the YouTube Data API to retrieve transcript information. The API requires authentication (an API key or OAuth 2.0).
* **Transcript retrieval:** Transcripts come in several forms:
  * **Automatic transcripts (ASR):** generated by YouTube's automatic speech recognition; often less accurate.
  * **Community contributions:** transcripts provided by the YouTube community.
  * **Official transcripts:** transcripts uploaded by the video creator.
* **MCP (Microservices Communication Protocol):** defines how your server communicates with other microservices. Common choices:
  * **REST (HTTP):** simple and widely understood; good for basic operations.
  * **gRPC:** high performance; uses Protocol Buffers for data serialization; well suited to complex data structures and demanding workloads.
  * **Message queues (e.g., RabbitMQ, Kafka):** asynchronous communication; useful for decoupling services and handling large request volumes.
* **Scalability and reliability:** consider how the server will handle heavy load and partial failures.
* **Error handling:** deal gracefully with API errors, network issues, and invalid requests.

**Implementation Options**

**1. REST (HTTP) server (Python with Flask/FastAPI)**

```python
# FastAPI example
from fastapi import FastAPI, HTTPException
from youtube_transcript_api import YouTubeTranscriptApi

app = FastAPI()

@app.get("/transcript/{video_id}")
async def get_transcript(video_id: str, lang: str = "en"):
    """Fetch the transcript for a YouTube video.

    Args:
        video_id: The YouTube video ID.
        lang: Desired transcript language (default: 'en').

    Returns:
        A list of transcript entries (text, start, duration).
    """
    try:
        return YouTubeTranscriptApi.get_transcript(video_id, languages=[lang])
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

Explanation:

* **`YouTubeTranscriptApi`:** a Python library that simplifies access to YouTube transcripts (`pip install youtube-transcript-api`).
* **`/transcript/{video_id}`:** an endpoint taking the video ID as a path parameter, with an optional `lang` query parameter for the language.
* The `try...except` block turns failures into an HTTP 500 with a descriptive message; `uvicorn` is the ASGI server that runs the app.

To use this:

1. Install dependencies: `pip install fastapi uvicorn youtube-transcript-api`
2. Run the server: `python your_script_name.py`
3. Call the API: `http://localhost:8000/transcript/VIDEO_ID` (replace `VIDEO_ID` with a real video ID; optionally add `?lang=fr`).

**2. gRPC server (Python)**

1. Define the Protocol Buffer (`.proto`) file describing the service and messages:

```protobuf
syntax = "proto3";

package youtube_transcript;

service TranscriptService {
  rpc GetTranscript (TranscriptRequest) returns (TranscriptResponse) {}
}

message TranscriptRequest {
  string video_id = 1;
  string language = 2;
}

message TranscriptResponse {
  repeated TranscriptEntry entries = 1;
}

message TranscriptEntry {
  string text = 1;
  double start = 2;
  double duration = 3;
}
```

2. Generate gRPC code from the `.proto` file:

```bash
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. youtube_transcript.proto
```

3. Implement the gRPC server:

```python
# youtube_transcript_server.py
import grpc
from concurrent import futures
from youtube_transcript_api import YouTubeTranscriptApi
import youtube_transcript_pb2
import youtube_transcript_pb2_grpc

class TranscriptServicer(youtube_transcript_pb2_grpc.TranscriptServiceServicer):
    def GetTranscript(self, request, context):
        try:
            transcript = YouTubeTranscriptApi.get_transcript(
                request.video_id, languages=[request.language])
            entries = [
                youtube_transcript_pb2.TranscriptEntry(
                    text=entry["text"], start=entry["start"],
                    duration=entry["duration"])
                for entry in transcript
            ]
            return youtube_transcript_pb2.TranscriptResponse(entries=entries)
        except Exception as e:
            context.abort(grpc.StatusCode.INTERNAL, str(e))

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    youtube_transcript_pb2_grpc.add_TranscriptServiceServicer_to_server(
        TranscriptServicer(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()
```

4. Implement a gRPC client (example):

```python
# youtube_transcript_client.py
import grpc
import youtube_transcript_pb2
import youtube_transcript_pb2_grpc

def get_transcript(video_id, language):
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = youtube_transcript_pb2_grpc.TranscriptServiceStub(channel)
        request = youtube_transcript_pb2.TranscriptRequest(
            video_id=video_id, language=language)
        try:
            response = stub.GetTranscript(request)
            for entry in response.entries:
                print(f"[{entry.start:.2f} - {entry.start + entry.duration:.2f}] {entry.text}")
        except grpc.RpcError as e:
            print(f"Error: {e.details()}")

if __name__ == "__main__":
    get_transcript("VIDEO_ID", "en")  # Replace with a real video ID
```

The `TranscriptServicer` retrieves the transcript with `YouTubeTranscriptApi` and converts it into the gRPC response format; the client connects, sends a `TranscriptRequest`, and prints the received entries. To use this:

1. Install dependencies: `pip install grpcio grpcio-tools protobuf youtube-transcript-api`
2. Compile the `.proto` file (step 2 above).
3. Run the server: `python youtube_transcript_server.py`
4. Run the client: `python youtube_transcript_client.py`

**3. Message-queue server (Python with RabbitMQ)**

A producer (client) sends a message containing the video ID and language to a queue; a consumer (server) receives it, fetches the transcript, and stores the result or publishes it to another queue.

Producer:

```python
# producer.py
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="transcript_requests")

message = {"video_id": "VIDEO_ID", "language": "en"}  # Replace with a real video ID
channel.basic_publish(exchange="", routing_key="transcript_requests",
                      body=json.dumps(message))
print(" [x] Sent %r" % message)
connection.close()
```

Consumer:

```python
# consumer.py
import json
import pika
from youtube_transcript_api import YouTubeTranscriptApi

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="transcript_requests")

def callback(ch, method, properties, body):
    message = json.loads(body.decode("utf-8"))
    video_id, language = message["video_id"], message["language"]
    print(f" [x] Received request for video ID: {video_id}, language: {language}")
    try:
        transcript = YouTubeTranscriptApi.get_transcript(video_id, languages=[language])
        # Process the transcript (store in a database, publish to another queue, ...)
        print(f" [x] Transcript fetched successfully for {video_id}")
        for entry in transcript[:5]:  # Example: print the first few lines
            print(entry)
    except Exception as e:
        print(f" [x] Error fetching transcript: {e}")

channel.basic_consume(queue="transcript_requests",
                      on_message_callback=callback, auto_ack=True)
print(" [*] Waiting for messages. To exit press CTRL+C")
channel.start_consuming()
```

`auto_ack=True` acknowledges each message as soon as it is delivered; consider manual acknowledgements for more robust error handling. To use this: install RabbitMQ (see the RabbitMQ website), `pip install pika youtube-transcript-api`, run the consumer, then run the producer.

**Key Considerations and Best Practices**

* **API key management:** store your YouTube Data API key securely (environment variables, secrets management); never hardcode it.
* **Rate limiting:** the YouTube Data API enforces quotas; implement retry logic with exponential backoff, and cache transcripts to reduce API calls.
* **Error handling and logging:** catch API errors, network issues, and invalid requests; log errors (e.g., with Python's `logging` module) for debugging.
* **Asynchronous operations:** for gRPC and message-queue implementations, use asynchronous operations (e.g., `asyncio`) to improve performance and scalability.
* **Data validation:** validate the input (video ID, language) to prevent errors and security vulnerabilities.
* **Monitoring:** track request latency and error rates to identify and address issues.
* **Security:** if the server handles sensitive data, add authentication, authorization, and encryption.
* **Scalability and deployment:** design for many concurrent requests (load balancer, multiple instances); choose a suitable deployment environment (cloud platform, Docker).
* **Caching:** use Redis or Memcached for frequently requested transcripts, with a cache-invalidation strategy.
* **Transcript availability and types:** not every video has a transcript, so handle the not-found case; support multiple languages; and consider the different transcript types (automatic, community, official) that `YouTubeTranscriptApi` exposes.

**Choosing the Right Approach**

* **REST (HTTP):** good for simple use cases and a widely accessible API; easy to implement and debug.
* **gRPC:** best for high-performance communication between microservices; more setup, but significant performance benefits.
* **Message queue:** ideal for asynchronous processing and decoupling; handles large request volumes and keeps requests flowing even if one service is temporarily unavailable.

The best approach depends on your requirements and overall architecture. Start with REST if you're unsure, then consider gRPC or message queues for better performance or scalability. Prioritize security, error handling, and rate limiting in all implementations.
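The retry-with-exponential-backoff advice above can be sketched as a small helper. This is a minimal illustration with hypothetical names, not code from any of the servers listed here.

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying on failure with exponential backoff plus jitter.

    Delays grow as base_delay * 2**attempt (1s, 2s, 4s, ...) with a little
    random jitter so many clients don't retry in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

A rate-limited API call would be wrapped as `with_backoff(lambda: fetch_transcript(video_id))`, where `fetch_transcript` is whatever function hits the quota-limited API.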

MCP Server for MySQL based on NodeJS

A Model Context Protocol server that provides read-only access to MySQL databases, enabling large language models (LLMs) to inspect database schemas and execute read-only queries.
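A read-only guard of the kind this server describes might look roughly like the sketch below. It is a hypothetical illustration in Python (the actual server is NodeJS, and its enforcement details are not documented in this listing).

```python
# Hypothetical read-only guard; keyword list and function name are illustrative.
READ_ONLY_PREFIXES = ("select", "show", "describe", "desc", "explain")

def is_read_only(sql: str) -> bool:
    """Accept a single statement only if it begins with a read-only keyword.

    Rejects multi-statement input (e.g. "SELECT 1; DROP TABLE t") so a write
    can't be smuggled in behind a read.
    """
    stripped = sql.strip().lower()
    if ";" in stripped.rstrip(";"):
        return False  # an interior semicolon means a second statement follows
    return stripped.startswith(READ_ONLY_PREFIXES)
```

A production guard would more likely rely on a read-only database user or `SET TRANSACTION READ ONLY` rather than string matching alone.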

Payman AI Documentation MCP Server

Provides AI assistants such as Claude or Cursor with access to Payman AI's documentation, helping developers build integrations more efficiently.