Discover Awesome MCP Servers
Extend your agent with 20,377 capabilities via MCP servers.
- All (20,377)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
BigGo MCP Server
A Model Context Protocol server that supports product search across e-commerce platforms, price history tracking, and natural-language searches based on product specifications.
Astra DB MCP Server
A Model Context Protocol server that allows large language models to interact with Astra DB databases, providing tools to manage collections and records through natural-language commands.
Ragie Model Context Protocol Server
An MCP server that allows AI models to retrieve information from Ragie's knowledge base through a simple "retrieve" tool.
PeakMojo Server
A Python server implementation for integrating the PeakMojo API, providing access to resources such as users, roles, scenarios, and tools for managing PeakMojo functionality.
Workers MCP
A package that connects Claude Desktop and other MCP clients to Cloudflare Workers, making custom functionality accessible through natural language via the Model Context Protocol.
MongoDB MCP Server
Provides LLMs with read-only access to MongoDB databases for inspecting collection schemas and executing aggregation pipelines.
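As a rough illustration of the read-only operations described above, here is a minimal sketch using the pymongo driver; the connection string, database, and collection names are placeholder assumptions, not details taken from this server.

```python
# Hypothetical sketch of what a read-only MongoDB tool might do: inspect a
# collection's schema from a sample document, then run an aggregation
# pipeline. All names below are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["shop"]["orders"]

# Approximate the collection "schema" from the field types of one document.
sample = collection.find_one() or {}
print({field: type(value).__name__ for field, value in sample.items()})

# Run a simple aggregation pipeline: count orders per status.
pipeline = [{"$group": {"_id": "$status", "count": {"$sum": 1}}}]
for row in collection.aggregate(pipeline):
    print(row)
```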
Needle MCP Server
A server that allows users to manage documents and run Claude-powered searches with Needle through the Claude Desktop application.
nile-mcp
An MCP server for the Nile database: manage and query databases, tenants, users, and authentication using LLMs.
actors-mcp-server
Extract data from websites, e-commerce platforms, social media, search engines, maps, and more using Apify's 3,000+ pre-built cloud tools, known as Actors.
DeepSRT MCP Server
An MCP server that lets users generate summaries of YouTube videos in multiple languages and formats by integrating with DeepSRT's API.
Appwrite MCP Server
A Model Context Protocol server that allows AI assistants to interact with Appwrite's API, providing tools to manage databases, users, functions, teams, and other resources within Appwrite projects.
Scrapezy
An MCP server that enables integration with Scrapezy for retrieving structured data from websites.
Beamlit MCP Server
An MCP server implementation that enables seamless integration between the Beamlit CLI and AI models using the Model Context Protocol standard.
Tembo MCP Server
An MCP server that enables Claude to interact with the Tembo Cloud platform API, allowing users to manage Tembo Cloud resources through natural language.
steam
Tinybird Analytics
An MCP server for interacting with a Tinybird Workspace from any MCP client.
Wanaku MCP Server
The Wanaku MCP router connects autonomous AI agents to your enterprise systems, leveraging proven integration technology such as Apache Camel to set up and manage hundreds or thousands of integrations.
Upstash
The Model Context Protocol (MCP) is a new, standardized protocol for managing context between large language models (LLMs) and external systems. This repository provides an installer as well as an MCP server for the Upstash Developer API.
Tecton MCP Server
Provides a suite of tools for interacting with Tecton clusters, managing feature stores, and executing Tecton CLI commands via the Model Context Protocol.
LSD MCP Server
This server connects Claude AI to the internet through the LSD SQL language, transforming web data into queryable, database-like structures for efficient interaction with real-world data.
Notion MCP Server
A Model Context Protocol server that connects Claude and other AI assistants to your Notion workspace, allowing AI to interact with databases, pages, and blocks.
Email Processing MCP Server
Extracts Outlook emails within a specified date range, stores them in SQLite, generates vector embeddings for their content, and enables semantic search over the messages via MongoDB Atlas Vector Search.
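The pipeline summarized above is built around Sentence Transformers embeddings and MongoDB Atlas Vector Search. The following is a minimal sketch of the search step under those assumptions; the index name, field names, and connection string are placeholders, not values taken from the project.

```python
# Sketch of the semantic-search step, assuming an Atlas Vector Search index
# named "vectorSearchIndex" over an "embedding" field. All names and the
# connection string are placeholders.
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")
client = MongoClient("mongodb+srv://USER:PASS@CLUSTER.mongodb.net")  # placeholder URI
collection = client["email_database"]["emails"]

def search_emails(query: str, limit: int = 10):
    query_embedding = model.encode(query).tolist()
    pipeline = [
        {"$vectorSearch": {
            "index": "vectorSearchIndex",
            "path": "embedding",
            "queryVector": query_embedding,
            "numCandidates": 100,
            "limit": limit,
        }},
        # Return the subject plus the similarity score for each hit.
        {"$project": {"_id": 0, "subject": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(collection.aggregate(pipeline))

for hit in search_emails("meeting about project updates"):
    print(hit["subject"], hit["score"])
```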
Docker MCP Server
Facilitates isolated code execution inside Docker containers, enabling secure multi-language script execution and integration with language models such as Claude via the Model Context Protocol.
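For readers unfamiliar with container-isolated execution, a minimal sketch with the Docker SDK for Python shows the general idea; the image, command, and resource limits are illustrative assumptions, not this server's actual implementation.

```python
# Illustrative only: run an untrusted snippet inside a throwaway container
# using the Docker SDK for Python. Image and limits are assumptions.
import docker

client = docker.from_env()
output = client.containers.run(
    image="python:3.12-slim",
    command=["python", "-c", "print(2 + 2)"],
    network_disabled=True,   # no network access for the sandboxed code
    mem_limit="128m",        # cap memory usage
    remove=True,             # clean up the container afterwards
)
print(output.decode().strip())  # -> "4"
```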
Perplexity Tool for Claude Desktop
A custom MCP tool that integrates Perplexity AI's API with Claude Desktop, enabling Claude to perform web-based research and provide answers with citations.
Knowledge Base MCP Server
Provides tools for listing and retrieving content from different knowledge bases using semantic search.
mcp-server-kubernetes
A TypeScript implementation of Kubernetes cluster operations for pods, deployments, and services, including listing, creating, updating, and deleting resources.
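The entry's own inline examples assume the TypeScript @kubernetes/client-node library. As a hedged, language-swapped sketch of the same read operations, here is a rough equivalent with the official Kubernetes Python client; the namespace is an assumption.

```python
# Hedged sketch of basic cluster operations using the official Kubernetes
# Python client (the server itself is TypeScript). Namespace is an assumption.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running in-cluster
core = client.CoreV1Api()
apps = client.AppsV1Api()

namespace = "default"

# List pods, deployments, and services in the namespace.
for pod in core.list_namespaced_pod(namespace).items:
    print("pod:", pod.metadata.name)
for deployment in apps.list_namespaced_deployment(namespace).items:
    print("deployment:", deployment.metadata.name)
for service in core.list_namespaced_service(namespace).items:
    print("service:", service.metadata.name)
```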
Coder Toolbox MCP Server
A utility toolkit that enhances Claude's code interaction capabilities by providing seamless tools for Java code analysis, manipulation, and testing workflows.
DeltaTask MCP Server
Provides an API for accessing a locally hosted task management system with tools to create, update, and organize tasks, including support for urgency levels, effort estimates, subtasks, and two-way synchronization with Obsidian Markdown files.
Substack Reader
Contribute to the development of pl728/substack-fetcher-mcp by creating an account on GitHub.
mcp-solver
A Model Context Protocol (MCP) server that exposes MiniZinc constraint-solving capabilities to large language models.
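To make the capability concrete, here is a small sketch using the MiniZinc Python bindings; the model and the choice of the Gecode solver are illustrative assumptions, not taken from the mcp-solver project.

```python
# Illustrative only: a tiny constraint model solved via the MiniZinc Python
# bindings. The mcp-solver server may expose this capability differently.
from minizinc import Instance, Model, Solver

model = Model()
model.add_string(
    """
    var 1..9: x;
    var 1..9: y;
    constraint x + y = 10;
    constraint x > y;
    solve satisfy;
    """
)

solver = Solver.lookup("gecode")   # assumes the Gecode solver is installed
instance = Instance(solver, model)
result = instance.solve()
print(result["x"], result["y"])    # one satisfying assignment, e.g. 6 and 4
```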