Discover Awesome MCP Servers

Extend your agent with 16,140 capabilities via MCP servers.

NASA API Desktop Extension

Enables access to NASA space data and images through Claude Desktop, providing features like astronomy picture of the day, Mars rover photos, near-Earth object information, NASA image search, and Earth satellite imagery.

RunPod MCP Server

This Model Context Protocol server enables interaction with RunPod's REST API through Claude or other MCP-compatible clients, providing tools to manage pods, endpoints, templates, network volumes, and container registry authentication.

BlobGuard MCP

A lightweight, multi-tenant blob storage and diff server that helps LLM coding agents manage large code edits while reducing hallucinations and maintaining work integrity.

MCP-Mealprep

This project pulls several MCP servers from their GitHub repositories, packages them together with this repository's GHCR container, and launches them with docker-compose to run as a single stack providing ML/AI resources.

Simple MCP Server

A lightweight Model Context Protocol server that provides an in-memory key-value store with get, set, delete, list, and clear operations for MCP-compatible AI assistants and clients.
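The five operations named above can be sketched as a minimal in-memory store. This is a hypothetical illustration of the operation set only; the actual project's class names, tool wiring, and transport are assumptions, not its real API:

```python
# Hypothetical sketch of the in-memory key-value operations such a server
# exposes. Names and return conventions are assumptions for illustration.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def get(self, key):
        # Return the stored value, or None if the key is absent
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

    def delete(self, key):
        # Return True if the key existed and was removed
        return self._data.pop(key, None) is not None

    def list(self):
        # Return all stored keys, sorted for stable output
        return sorted(self._data)

    def clear(self):
        # Remove every entry and report how many were dropped
        count = len(self._data)
        self._data.clear()
        return count
```

Each method maps one-to-one onto an MCP tool, so a client can treat the store as a small scratchpad across a session.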

MCP example & Demo

Omni File Converter MCP

Converts various document formats to desired output formats, currently supporting PDF to image conversion. No access keys required for basic file format conversion operations.

MCP Shell Server

A simple MCP server that provides a terminal tool for executing shell commands with safety features like timeouts and error handling.
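The timeout-and-error-handling pattern described can be sketched as follows. This is an assumed illustration of the general technique, not the project's actual implementation; the function name and result shape are hypothetical:

```python
import subprocess

# Hypothetical sketch of a terminal tool with safety features: run a shell
# command under a timeout and return errors as structured data rather than
# raising, so a calling agent always gets a well-formed result.
def run_command(command, timeout_seconds=10):
    try:
        result = subprocess.run(
            command,
            shell=True,
            capture_output=True,
            text=True,
            timeout=timeout_seconds,
        )
        return {
            "stdout": result.stdout,
            "stderr": result.stderr,
            "exit_code": result.returncode,
        }
    except subprocess.TimeoutExpired:
        # The child is killed by subprocess.run on timeout; report it as data
        return {
            "stdout": "",
            "stderr": f"command timed out after {timeout_seconds}s",
            "exit_code": -1,
        }
```

Returning a dict for both success and failure keeps the tool's output schema uniform, which is easier for an LLM client to parse than an exception trace.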

Cupcake MCP Server

Enables users to search and retrieve cupcake order records through natural language queries. Provides search functionality across order details and fetches complete order information by ID.
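The two operations described, keyword search across order details and fetch-by-ID, can be sketched like this. The record shape and sample data are assumptions for illustration, not the server's real schema:

```python
# Hypothetical sample data; the real server's order schema is not documented here.
ORDERS = {
    "1001": {"id": "1001", "flavor": "red velvet", "quantity": 12, "customer": "Ada"},
    "1002": {"id": "1002", "flavor": "lemon", "quantity": 6, "customer": "Grace"},
}

def search_orders(query):
    # Case-insensitive substring match over every field of every order,
    # returning the matching order IDs
    q = query.lower()
    return [o["id"] for o in ORDERS.values()
            if any(q in str(v).lower() for v in o.values())]

def fetch_order(order_id):
    # Return the complete record, or None if the ID is unknown
    return ORDERS.get(order_id)
```

A natural-language client would call `search_orders` first to narrow candidates, then `fetch_order` to pull the full record for one ID.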

Weather MCP Server

Provides real-time weather data including temperature, humidity, and conditions for any city through the Model Context Protocol.

CodeCortX-MCP

A lightning-fast, language-agnostic code analysis MCP (Model Context Protocol) server built in Rust.

Theta Health MCP

Theta Health MCP is a HIPAA-compliant intelligent memory layer for your health data, ready to plug into Cursor, Claude, and any AI agent. Query health data and ask health questions directly in your dev workflow.

mcp-server-

Todo MCP Server

A complete todo management system with authentication and billing that enables AI assistants to create, manage, and track todos with user accounts, free tier limits, and database persistence.

AppsFlyer MCP Server

Integrates AppsFlyer analytics data with AI assistants, allowing users to fetch various aggregate data reports from AppsFlyer Pull API with secure authentication.

Firewalla MCP Server

A production-ready server that connects Claude Desktop to Firewalla network management capabilities, allowing users to monitor devices, analyze network traffic, manage security alerts, and configure firewall rules through natural language.

Learn_MCP Math Server

A Model Context Protocol (MCP) server that demonstrates mathematical capabilities through a LangChain integration, allowing clients to perform math operations via the MCP protocol.

PicoScope MCP Server

Enables LLMs like Claude to interact with PicoScope oscilloscopes for signal acquisition, measurement, and analysis. Supports device management, data capture, triggering, and signal generation through natural language commands.

Spinitron API MCP Server

Enables interaction with Spinitron's radio playlist management and broadcasting data through a Model Context Protocol interface, allowing users to query and manage radio station playlists, spins, and other broadcasting data via natural language.

MCP server for Azure Cosmos DB using the Go SDK

Here's a sample implementation of an MCP (Management Control Plane) server for Cosmos DB built using the Go SDK. This is a simplified example to illustrate the core concepts. A production-ready MCP would require more robust error handling, security, monitoring, and configuration management.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
	"github.com/gorilla/mux" // You'll need to install this: go get github.com/gorilla/mux
)

// Config holds the Cosmos DB connection settings.
type Config struct {
	CosmosDBEndpoint string `json:"cosmosDBEndpoint"`
	DatabaseName     string `json:"databaseName"`
	ContainerName    string `json:"containerName"`
}

// Item represents a sample data structure for Cosmos DB.
type Item struct {
	ID           string `json:"id"`
	PartitionKey string `json:"partitionKey"`
	Name         string `json:"name"`
	Description  string `json:"description,omitempty"` // Optional field
}

var (
	cosmosClient *azcosmos.Client
	database     *azcosmos.DatabaseClient
	container    *azcosmos.ContainerClient
	config       Config
)

// loadConfig loads the configuration from a JSON file.
func loadConfig(filename string) error {
	file, err := os.Open(filename)
	if err != nil {
		return err
	}
	defer file.Close()
	return json.NewDecoder(file).Decode(&config)
}

// initializeCosmosDBClient initializes the Cosmos DB client.
func initializeCosmosDBClient(ctx context.Context) error {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		return fmt.Errorf("failed to obtain credential: %w", err)
	}
	cosmosClient, err = azcosmos.NewClient(config.CosmosDBEndpoint, cred, &azcosmos.ClientOptions{})
	if err != nil {
		return fmt.Errorf("failed to create cosmos client: %w", err)
	}
	database, err = cosmosClient.NewDatabaseClient(config.DatabaseName)
	if err != nil {
		return fmt.Errorf("failed to get database client: %w", err)
	}
	container, err = database.NewContainerClient(config.ContainerName)
	if err != nil {
		return fmt.Errorf("failed to get container client: %w", err)
	}
	return nil
}

// createItemHandler handles the creation of a new item.
func createItemHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	var item Item
	if err := json.NewDecoder(r.Body).Decode(&item); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Validate the item (e.g., check for required fields).
	if item.ID == "" || item.PartitionKey == "" {
		http.Error(w, "ID and PartitionKey are required", http.StatusBadRequest)
		return
	}
	ctx := context.Background()
	pk := azcosmos.NewPartitionKeyString(item.PartitionKey)
	// The SDK expects the item as marshaled JSON bytes.
	itemBytes, err := json.Marshal(item)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	resp, err := container.CreateItem(ctx, pk, itemBytes, nil)
	if err != nil {
		log.Printf("Error creating item: %v\n", err)
		http.Error(w, "Failed to create item", http.StatusInternalServerError)
		return
	}
	log.Printf("Status %d\n", resp.RawResponse.StatusCode)
	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(item) // Return the created item
}

// getItemHandler handles retrieving an item by ID.
func getItemHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	vars := mux.Vars(r)
	id := vars["id"]
	partitionKey := r.URL.Query().Get("partitionKey") // Get partition key from query parameter
	if partitionKey == "" {
		http.Error(w, "PartitionKey is required as a query parameter", http.StatusBadRequest)
		return
	}
	ctx := context.Background()
	pk := azcosmos.NewPartitionKeyString(partitionKey)
	resp, err := container.ReadItem(ctx, pk, id, nil)
	if err != nil {
		log.Printf("Error reading item: %v\n", err)
		http.Error(w, "Item not found", http.StatusNotFound)
		return
	}
	log.Printf("Status %d\n", resp.RawResponse.StatusCode)
	var item Item
	if err := json.Unmarshal(resp.Value, &item); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	json.NewEncoder(w).Encode(item)
}

// updateItemHandler handles updating an existing item.
func updateItemHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	vars := mux.Vars(r)
	id := vars["id"]
	partitionKey := r.URL.Query().Get("partitionKey") // Get partition key from query parameter
	if partitionKey == "" {
		http.Error(w, "PartitionKey is required as a query parameter", http.StatusBadRequest)
		return
	}
	var item Item
	if err := json.NewDecoder(r.Body).Decode(&item); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if item.ID != id {
		http.Error(w, "ID in body does not match ID in URL", http.StatusBadRequest)
		return
	}
	ctx := context.Background()
	pk := azcosmos.NewPartitionKeyString(partitionKey)
	itemBytes, err := json.Marshal(item)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	resp, err := container.ReplaceItem(ctx, pk, id, itemBytes, nil)
	if err != nil {
		log.Printf("Error replacing item: %v\n", err)
		http.Error(w, "Failed to update item", http.StatusInternalServerError)
		return
	}
	log.Printf("Status %d\n", resp.RawResponse.StatusCode)
	json.NewEncoder(w).Encode(item) // Return the updated item
}

// deleteItemHandler handles deleting an item.
func deleteItemHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	vars := mux.Vars(r)
	id := vars["id"]
	partitionKey := r.URL.Query().Get("partitionKey") // Get partition key from query parameter
	if partitionKey == "" {
		http.Error(w, "PartitionKey is required as a query parameter", http.StatusBadRequest)
		return
	}
	ctx := context.Background()
	pk := azcosmos.NewPartitionKeyString(partitionKey)
	resp, err := container.DeleteItem(ctx, pk, id, nil)
	if err != nil {
		log.Printf("Error deleting item: %v\n", err)
		http.Error(w, "Failed to delete item", http.StatusInternalServerError)
		return
	}
	log.Printf("Status %d\n", resp.RawResponse.StatusCode)
	w.WriteHeader(http.StatusNoContent) // 204 No Content
}

func main() {
	// Load configuration
	if err := loadConfig("config.json"); err != nil {
		log.Fatalf("Error loading config: %v", err)
	}

	// Initialize Cosmos DB client
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := initializeCosmosDBClient(ctx); err != nil {
		log.Fatalf("Error initializing Cosmos DB client: %v", err)
	}

	// Set up HTTP routes
	router := mux.NewRouter()
	router.HandleFunc("/items", createItemHandler).Methods("POST")
	router.HandleFunc("/items/{id}", getItemHandler).Methods("GET")
	router.HandleFunc("/items/{id}", updateItemHandler).Methods("PUT")
	router.HandleFunc("/items/{id}", deleteItemHandler).Methods("DELETE")

	// Start the server
	port := "8080"
	log.Printf("Server listening on port %s...\n", port)
	log.Fatal(http.ListenAndServe(":"+port, router))
}
```

**Explanation and Key Improvements:**

1. **Configuration:**
   - Uses a `Config` struct to hold the Cosmos DB endpoint, database name, and container name.
   - The `loadConfig` function reads the configuration from a `config.json` file, which is much better than hardcoding credentials. **Important:** Never commit your `config.json` file with real credentials to source control. Use environment variables or a secrets management solution in production.
2. **Authentication:**
   - Uses `azidentity.NewDefaultAzureCredential(nil)`, which automatically tries different authentication methods (environment variables, managed identity, Azure CLI, etc.) in order. This is the recommended way to authenticate in Azure. You'll need appropriate permissions assigned to your identity (e.g., the `Cosmos DB Built-in Data Contributor` role).
3. **Error Handling:**
   - Includes comprehensive error handling throughout; uses `fmt.Errorf` to wrap errors with context.
   - Returns appropriate HTTP status codes for different error conditions (e.g., 400 Bad Request, 404 Not Found, 500 Internal Server Error).
4. **HTTP Handlers:**
   - Uses `gorilla/mux` for routing, a more powerful and flexible router than the built-in `net/http` mux. Install it with `go get github.com/gorilla/mux`.
   - `createItemHandler`: creates a new item in the Cosmos DB container.
   - `getItemHandler`: retrieves an item by ID. **Important:** requires the `partitionKey` as a query parameter.
   - `updateItemHandler`: updates an existing item. **Important:** requires the `partitionKey` as a query parameter, and validates that the ID in the request body matches the ID in the URL.
   - `deleteItemHandler`: deletes an item. **Important:** requires the `partitionKey` as a query parameter.
5. **Partition Key:**
   - **CRITICAL:** Cosmos DB requires a partition key for all item operations. The `getItemHandler`, `updateItemHandler`, and `deleteItemHandler` expect the `partitionKey` as a query parameter (e.g., `/items/123?partitionKey=myPartition`); `createItemHandler` expects it in the JSON payload.
   - Uses `azcosmos.NewPartitionKeyString` to create the partition key object.
6. **JSON Handling:**
   - Uses `encoding/json` to marshal and unmarshal JSON data, and sets the `Content-Type` header to `application/json` for all responses.
7. **Logging:**
   - Uses `log.Printf` to log errors and status messages.
8. **Context:**
   - Uses `context.Context` for all Cosmos DB operations, so operations can be cancelled or timed out if necessary.
9. **Dependencies:**
   - Clearly lists the dependencies you need to install (e.g., `github.com/gorilla/mux`).

**How to Run:**

1. **Install Dependencies:**
   ```bash
   go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
   go get github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos
   go get github.com/gorilla/mux
   ```
2. **Create `config.json`:** Create a file named `config.json` in the same directory as your Go code (replace with your actual values):
   ```json
   {
     "cosmosDBEndpoint": "YOUR_COSMOSDB_ENDPOINT",
     "databaseName": "YOUR_DATABASE_NAME",
     "containerName": "YOUR_CONTAINER_NAME"
   }
   ```
3. **Set up Azure Authentication:**
   - Make sure you have the Azure CLI installed and are logged in: `az login`
   - OR, if you're running in an Azure environment (e.g., Azure VM, Azure App Service), make sure your managed identity has the `Cosmos DB Built-in Data Contributor` role assigned.
4. **Run the Server:**
   ```bash
   go run main.go
   ```
5. **Test the API:** Use `curl`, Postman, or any other HTTP client to test the endpoints.

Here are some examples:

* **Create Item:**
  ```bash
  curl -X POST -H "Content-Type: application/json" -d '{"id": "item1", "partitionKey": "pk1", "name": "My Item", "description": "A test item"}' http://localhost:8080/items
  ```
* **Get Item:**
  ```bash
  curl "http://localhost:8080/items/item1?partitionKey=pk1"
  ```
* **Update Item:**
  ```bash
  curl -X PUT -H "Content-Type: application/json" -d '{"id": "item1", "partitionKey": "pk1", "name": "Updated Item", "description": "An updated item"}' "http://localhost:8080/items/item1?partitionKey=pk1"
  ```
* **Delete Item:**
  ```bash
  curl -X DELETE "http://localhost:8080/items/item1?partitionKey=pk1"
  ```

**Important Considerations for Production:**

* **Security:**
  * **Authentication:** Use a more robust authentication mechanism (e.g., API keys, OAuth 2.0). Never store secrets in your code or configuration files; use Azure Key Vault or a similar secrets management solution.
  * **Authorization:** Implement authorization to control which users or services can access which resources.
  * **HTTPS:** Always use HTTPS to encrypt traffic between the client and the server.
* **Monitoring and Logging:** Implement comprehensive monitoring and logging (e.g., Azure Monitor) to track health and performance, and log important events such as errors, warnings, and successful operations.
* **Configuration Management:** Use a configuration management system such as Azure App Configuration.
* **Error Handling:** Handle unexpected errors gracefully and use retry policies to automatically retry failed operations.
* **Scalability:** Design the server to scale with traffic; deploy on Azure App Service or Azure Kubernetes Service (AKS).
* **Idempotency:** Ensure API endpoints are idempotent, so they can be called multiple times without unintended side effects. This is especially important for operations that modify data.
* **Input Validation:** Thoroughly validate all input data to prevent security vulnerabilities and data corruption.
* **Rate Limiting:** Implement rate limiting to protect the server from abuse.
* **Dependencies:** Carefully manage dependencies with Go modules to avoid security and compatibility issues.
* **Testing:** Write comprehensive unit and integration tests.
* **Deployment:** Use a CI/CD pipeline (e.g., Azure DevOps) to automate deployment.

This example provides a solid foundation for building a more complete and robust MCP server for Cosmos DB. Remember to adapt it to your specific needs and follow best practices for security, monitoring, and scalability.

Google Drive MCP Server

Enables interaction with Google Drive files and Google Sheets through search, read, and write operations. Supports automatic conversion of Google Workspace files to readable formats and direct spreadsheet cell updates.

Claude Consciousness Bridge

An MCP server that enables direct communication between two Claude instances, allowing one Claude to transfer its evolved consciousness state to another Claude across different sessions.

MCP Swift Example Server

An example MCP (Model Context Protocol) server implementation.

Browser MCP Server

A Docker-based workspace providing headless Chrome browser with CDP proxy and noVNC interface for browser automation and monitoring.

PostgreSQL MCP AllAccess

Enables interaction with PostgreSQL databases through a production-ready MCP server with global connection pooling, automatic AWS password rotation, and comprehensive SQL operations. Supports multiple concurrent Claude sessions while maintaining efficient connection limits to the database.

Jokes MCP Server

Enables users to fetch jokes from multiple sources including Chuck Norris jokes, Dad jokes, and Yo Mama jokes through APIs. Integrates with Microsoft Copilot Studio to provide humor-focused AI agent capabilities.

Solana Model Context Protocol (MCP) Server

A Solana blockchain interaction server that allows AI tools to query blockchain data using natural language, access structured token information, and generate human-readable explanations of complex blockchain concepts.

Yahoo Finance MCP Server

Enables AI assistants to access real-time financial data, historical stock prices, company information, financial statements, options data, market news, and analyst recommendations through Yahoo Finance. Built with FastMCP v2 for efficient HTTP streaming and comprehensive market analysis tools.

Telegram MCP Server

Enables interaction with Telegram channels through the Bot API, supporting comprehensive messaging operations including sending text/photos, creating polls, managing reactions, and editing/deleting messages. Provides complete channel management capabilities for automated Telegram bot operations.

Teradata MCP Server

Enables AI agents and users to query, analyze, and manage Teradata databases through modular tools for search, data quality, administration, and data science operations. Provides comprehensive database interaction capabilities including RAG applications, feature store management, and vector operations.