Discover Awesome MCP Servers

Extend your agent with 15,444 capabilities via MCP servers.

mariadb-mcp-server

An MCP server that provides read-only access to MariaDB.

kagi-server MCP Server

Mirror.

MCP Image Generation Server

Mirror.

WCGW

Sends code snippets and paths to Claude; designed to work with the WCGW MCP server.

TimezoneToolkit MCP Server

An advanced MCP server providing comprehensive time and timezone tools.

GitHub MCP Server

MCP Command History

A powerful tool for exploring, searching, and managing your shell command history through a Model Context Protocol (MCP) interface. This project lets you easily access, search, and retrieve previously executed shell commands.

McpDocs

Okay, this is a complex task involving several moving parts. Here's a breakdown of how you can provide Elixir project documentation (including dependencies) to an LLM via an SSE (Server-Sent Events) MCP (Model Context Protocol) server, covering the steps, tools, and considerations.

**1. Project Setup and Documentation Generation**

* **Elixir Project:** Assume you have an Elixir project with modules, functions, and proper documentation using `@doc` attributes.
* **ExDoc:** ExDoc is the standard Elixir documentation generator and is crucial for extracting and formatting your project's documentation. Add it to your `mix.exs` as a dependency:

```elixir
def deps do
  [
    {:ex_doc, "~> 0.31", only: :dev, runtime: false}
  ]
end
```

* **Configure ExDoc (optional):** Customize ExDoc in `mix.exs` if needed, for example to include specific applications or exclude certain modules:

```elixir
def project do
  [
    app: :my_project,
    version: "0.1.0",
    elixir: "~> 1.14",
    start_permanent: Mix.env() == :prod,
    deps: deps(),
    docs: [
      main: "MyProject",  # The main page title
      source_url: "https://github.com/your-username/my_project",
      extras: ["README.md": "README"],  # Include the README
      applications: [:my_project]  # Document only this application
      # Exclude specific modules:
      # exclude_modules: [MyProject.InternalModule]
    ]
  ]
end
```

* **Generate Documentation:** Run `mix docs` in your project directory. This creates a `doc/` directory containing the HTML documentation.
* **Dependency Documentation:** ExDoc can also generate documentation for your project's dependencies, which is *essential* for giving the LLM a comprehensive picture. Make sure your dependencies are properly declared in `mix.exs`; when you run `mix deps.get` and then `mix docs`, ExDoc will attempt to generate documentation for them as well. If a dependency doesn't have ExDoc-compatible documentation, you may need an alternative approach (see "Handling Dependencies Without ExDoc" below).

**2. Extracting Documentation Data**

The key challenge is getting the documentation data out of the HTML generated by ExDoc and into a format suitable for the LLM. Here are three approaches, ordered from simplest to most flexible.

**Option 1: Simple Text Extraction (Quick and Dirty).** Use a command-line tool like `lynx` or `w3m` to convert the HTML documentation to plain text, repeating for each module's HTML file:

```bash
lynx -dump doc/MyProject.html > my_project_docs.txt
```

*Pros:* easy to implement. *Cons:* all structure is lost, the output is hard to parse, and it's not ideal for complex documentation.

**Option 2: HTML Parsing with `floki` (Recommended).** Use the `floki` library to parse the HTML generated by ExDoc; it lets you select specific elements (headings, paragraphs, code blocks) using CSS selectors. Add `floki` to your `mix.exs`:

```elixir
def deps do
  [
    {:floki, "~> 0.34"}
  ]
end
```

An illustrative extractor:

```elixir
defmodule DocExtractor do
  def extract_module_docs(html_file) do
    html_content = File.read!(html_file)
    {:ok, html} = Floki.parse_document(html_content)

    module_name = html |> Floki.find("h1") |> Floki.text()

    function_docs =
      html
      |> Floki.find(".summary")  # Adjust selector to ExDoc's HTML structure
      |> Enum.map(fn element ->
        function_name = element |> Floki.find("a") |> Floki.text()
        function_description = element |> Floki.find("p") |> Floki.text()
        %{name: function_name, description: function_description}
      end)

    %{module: module_name, functions: function_docs}
  end
end
```

*Important:* you'll need to *carefully inspect the HTML structure generated by ExDoc* to determine the correct CSS selectors; the ones above are placeholders. Use your browser's developer tools to examine the HTML. *Pros:* more structured extraction; you can target specific parts of the documentation. *Cons:* requires understanding HTML and CSS selectors, and ExDoc's HTML structure may change in future versions.

**Option 3: Custom ExDoc Plugin (Advanced).** Write a custom ExDoc plugin that intercepts the documentation generation process and outputs the data in a format you control (e.g., JSON or Markdown). *Pros:* complete control over the output format, tailored to the LLM's requirements. *Cons:* significant development effort; requires a deep understanding of ExDoc's internals.

**3. SSE MCP Server Implementation**

You'll need an Elixir application that acts as an SSE MCP server. This server will:

1. Receive requests from the LLM.
2. Extract documentation data (using one of the methods above).
3. Format the data into SSE messages.
4. Send the SSE messages to the LLM.

You'll likely want `Plug` or `Phoenix` to create the HTTP server and handle SSE; `Jason` is useful for encoding data to JSON. An example using Plug:

```elixir
defmodule DocServer do
  use Plug.Router

  plug :match
  plug :dispatch

  get "/" do
    conn
    |> Plug.Conn.put_resp_content_type("text/event-stream")
    |> Plug.Conn.send_resp(200, stream_docs())
  end

  defp stream_docs do
    # Replace with your actual documentation extraction logic
    docs = extract_all_docs()

    Enum.reduce(docs, "", fn doc, acc ->
      acc <> "data: #{Jason.encode!(doc)}\n\n"
    end)
  end

  defp extract_all_docs do
    # Placeholder. Iterate through the HTML files in the `doc/` directory,
    # extract the relevant information using Floki (or your chosen method),
    # and return a list of data structures such as:
    #
    #   [
    #     %{module: "MyProject.MyModule",
    #       functions: [%{name: "my_function", description: "Does something"}]},
    #     %{module: "MyProject.AnotherModule",
    #       functions: [%{name: "another_function", description: "Does something else"}]}
    #   ]
    [
      %{module: "ExampleModule",
        functions: [%{name: "example_function", description: "This is an example function."}]}
    ]
  end
end

defmodule DocServer.Application do
  use Application

  def start(_type, _args) do
    children = [
      {Plug.Cowboy, scheme: :http, plug: DocServer, options: [port: 4000]}
    ]

    opts = [strategy: :one_for_one, name: DocServer.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
```

*Explanation:* `DocServer` uses `Plug.Router` to define a route that sets the `Content-Type` to `text/event-stream` and sends a 200 OK response. `stream_docs/0` is the core of the SSE implementation: it extracts the documentation data, encodes it as JSON, and builds SSE messages, each of which starts with `data:`, is followed by the JSON payload, and ends with `\n\n`. `extract_all_docs/0` is a placeholder for your actual extraction logic. `DocServer.Application` starts the Cowboy web server via `Plug.Cowboy`.

*MCP:* the "MCP" part refers to how the LLM *requests* the documentation. The simplest approach is to have the LLM make a standard HTTP GET request to the SSE endpoint (e.g., `http://localhost:4000`). More sophisticated implementations might involve message queues or other request/response mechanisms; the key is that the LLM needs a way to signal to the server that it wants the documentation.

**4. LLM Integration**

The LLM needs a client that can connect to the SSE endpoint, receive SSE messages, parse the JSON data in them, and use the documentation to inform its responses. An example in Python:

```python
import json

import requests
import sseclient

url = "http://localhost:4000"  # Replace with your server's URL

try:
    response = requests.get(url, stream=True)
    response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)

    client = sseclient.SSEClient(response)
    for event in client.events():
        try:
            data = json.loads(event.data)
            print(f"Received data: {data}")
            # Use the data to inform the LLM's responses, e.g.:
            # if "module" in data and "functions" in data:
            #     for function in data["functions"]:
            #         print(f"  {function['name']}: {function['description']}")
        except json.JSONDecodeError as e:
            print(f"Error decoding JSON: {e}")
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")
```

The code uses the `requests` and `sseclient` libraries to connect to the SSE endpoint, iterates through the events, parses the JSON data, and prints it to the console. Replace the `print` statements with your actual LLM integration logic.

**5. Handling Dependencies Without ExDoc**

If a dependency doesn't have ExDoc-compatible documentation, you have a few options:

* **Read the Source Code:** The most reliable (but most time-consuming) approach. You can use Elixir's `Code.string_to_quoted/1` to parse the source code and then traverse the abstract syntax tree (AST) to find function definitions and `@doc` attributes.
* **Use `mix xref`:** The `mix xref` task can help you find references to functions and modules in your project and its dependencies, giving you some insight into how a dependency is used even without formal documentation.
* **External Documentation:** Check whether the dependency has documentation on its website or in its Git repository; you may need to scrape it or convert it to a suitable format.

**6. Important Considerations**

* **Error Handling:** Implement robust error handling in both the server and the client: network errors, JSON parsing errors, and other potential issues.
* **Scalability:** To handle a large number of requests, consider a more scalable web server like Phoenix and a message queue for request handling.
* **Security:** If the documentation contains sensitive information, secure the SSE endpoint with authentication and authorization.
* **Data Format:** Keep the JSON format of the SSE messages well-defined and consistent so the LLM can parse it reliably.
* **Rate Limiting:** Rate-limit the server so the LLM can't overwhelm it with requests.
* **Context Window:** LLMs have a limited context window; you may need to chunk the documentation into smaller pieces and send them across multiple SSE messages.
* **Prompt Engineering:** Experiment with different prompts to guide the LLM in using the documentation data effectively.

**Example Workflow**

1. The LLM receives a user query that requires information about an Elixir function.
2. The LLM sends a request to the SSE endpoint (e.g., `http://localhost:4000?module=MyProject.MyModule&function=my_function`). You'll need to modify `DocServer` to handle query parameters; a minimal sketch appears in the appendix below.
3. `DocServer` extracts the documentation for the specified module and function.
4. `DocServer` sends the documentation data to the LLM as SSE messages.
5. The LLM parses the documentation data and uses it to generate a response to the user.

This is a complex project, but by breaking it into smaller steps and using the right tools, you can successfully provide Elixir project documentation to an LLM via an SSE MCP server. Remember to adapt the code examples to your specific needs and to test your implementation thoroughly. Good luck!
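**Appendix: Query-Parameter Handling (Sketch)**

A minimal sketch of the query-parameter handling mentioned in step 2 of the workflow, written as a variant of the `DocServer` route above. This is illustrative only: `extract_docs_for/2` is a hypothetical helper invented for this sketch, not part of any real McpDocs API, and the toy `extract_all_docs/0` stands in for your real extraction logic.

```elixir
defmodule DocServer do
  use Plug.Router

  plug :match
  plug :dispatch

  get "/" do
    # Plug parses the query string on demand; fetch it before reading params.
    conn = Plug.Conn.fetch_query_params(conn)
    module = conn.query_params["module"]      # e.g. "MyProject.MyModule"
    function = conn.query_params["function"]  # e.g. "my_function"

    conn
    |> Plug.Conn.put_resp_content_type("text/event-stream")
    |> Plug.Conn.send_resp(200, "data: #{Jason.encode!(extract_docs_for(module, function))}\n\n")
  end

  # Hypothetical helper: narrows the full doc set to the requested module,
  # and optionally to a single function within it.
  defp extract_docs_for(module, function) do
    case Enum.find(extract_all_docs(), &(&1.module == module)) do
      nil -> %{error: "module not found"}
      doc when is_nil(function) -> doc
      doc -> %{doc | functions: Enum.filter(doc.functions, &(&1.name == function))}
    end
  end

  defp extract_all_docs do
    [%{module: "ExampleModule",
       functions: [%{name: "example_function", description: "An example."}]}]
  end
end
```

With this in place, `GET /?module=ExampleModule&function=example_function` returns just that function's documentation as a single SSE event rather than streaming the whole doc set.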

mcp-server-bluesky

Mirror.

MCP System Monitor

A system monitoring tool that exposes system metrics via the Model Context Protocol (MCP), allowing LLMs to retrieve real-time system information through an MCP-compatible interface.

mpc-csharp-semantickernel

Okay, here's an example of how you might use Microsoft Semantic Kernel with OpenAI and a hypothetical "MCP Server" (assuming MCP stands for something like "My Custom Processing Server" or "Message Control Protocol Server"). It's broken into conceptual steps and code snippets that focus on the core ideas. Since "MCP Server" is vague, the example makes some assumptions about its functionality; you'll need to adapt it to your specific MCP server's API and purpose.

**Conceptual Overview**

1. **Semantic Kernel Setup:** Initialize the Semantic Kernel with your OpenAI API key and any other necessary configuration.
2. **MCP Server Interaction (Hypothetical):** Define a function or class that handles communication with your MCP server by sending requests and receiving responses. The example assumes the MCP server performs some custom processing on text, like sentiment analysis, entity extraction, or a domain-specific task.
3. **Semantic Kernel Skills:** Create skills that leverage both OpenAI and the MCP server: prompts use OpenAI for general language tasks and call the MCP server for specialized processing.
4. **Kernel Execution:** Run the Semantic Kernel, passing in user input and orchestrating the execution of the skills.

**Code Example (Python)**

```python
import asyncio

import aiohttp  # For asynchronous HTTP requests to the MCP server
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion


# --- 1. Semantic Kernel Setup ---
async def setup_kernel(openai_api_key, openai_org_id, mcp_server_url):
    kernel = sk.Kernel()

    # Add the OpenAI connector
    kernel.add_chat_service(
        "openai",
        OpenAIChatCompletion("gpt-3.5-turbo", openai_api_key, openai_org_id),
    )

    # Add the MCP server connector (as a plugin)
    kernel.import_plugin(MCPPlugin(mcp_server_url), "MCP")

    return kernel


# --- 2. MCP Server Interaction (Hypothetical) ---
class MCPPlugin:
    """A Semantic Kernel plugin that interacts with the MCP server."""

    def __init__(self, mcp_server_url):
        self.mcp_server_url = mcp_server_url

    async def process_text(self, text: str) -> str:
        """Sends text to the MCP server for processing and returns the result."""
        async with aiohttp.ClientSession() as session:
            try:
                async with session.post(self.mcp_server_url, json={"text": text}) as response:
                    response.raise_for_status()  # Raise for bad responses (4xx or 5xx)
                    data = await response.json()
                    return data["result"]  # Assumes JSON with a "result" field
            except aiohttp.ClientError as e:
                return f"Error communicating with MCP Server: {e}"

    @sk.kernel_function(
        description="Processes text using the MCP Server for specialized tasks.",
        name="ProcessWithMCP",
    )
    async def process_with_mcp(self, text: str) -> str:
        """Processes the given text using the MCP server."""
        return await self.process_text(text)


# --- 3. Semantic Kernel Skills ---
# A skill that uses OpenAI to generate a summary and then uses the
# MCP server for sentiment analysis.
prompt = """
Summarize the following text:

{{$input}}

After summarizing, perform sentiment analysis using the MCP Server.

Summary: {{ MCP.ProcessWithMCP $input }}
"""


# --- 4. Kernel Execution ---
async def main():
    # Replace with your actual API keys and server URL
    openai_api_key = "YOUR_OPENAI_API_KEY"
    openai_org_id = "YOUR_OPENAI_ORG_ID"
    mcp_server_url = "http://your-mcp-server.com/process"

    kernel = await setup_kernel(openai_api_key, openai_org_id, mcp_server_url)

    # Create a semantic function from the prompt
    summarize_and_analyze = kernel.create_semantic_function(prompt)

    user_input = "This is a fantastic product! I love it. However, the shipping was very slow."

    result = await summarize_and_analyze.invoke(user_input)
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

**Explanation and Important Considerations**

* **`setup_kernel()`:** Initializes the Semantic Kernel, adding the OpenAI connector and the `MCPPlugin`. The plugin is registered under the name "MCP", which matters when referencing it in prompts.
* **`MCPPlugin`:** Encapsulates interaction with your MCP server. `process_text()` makes the actual HTTP request, and the `@sk.kernel_function` decorator exposes `process_with_mcp()` as a Semantic Kernel function; its `description` and `name` matter for discoverability and use in prompts.
* **`process_text()`:** This is where you adapt the code to your MCP server's API. The example assumes the server accepts a JSON payload with a `text` field and returns JSON with a `result` field; change the URL, request format, and response parsing to match your server. Error handling is crucial here.
* **Prompt:** Demonstrates combining OpenAI and the MCP server. `{{$input}}` is the user's input, and `{{ MCP.ProcessWithMCP $input }}` calls the `ProcessWithMCP` function in the `MCP` plugin, inserting the server's output into the prompt.
* **Asynchronous Operations:** The code uses `async`/`await` because network requests to OpenAI and the MCP server are asynchronous; this keeps the program from blocking while waiting for responses. Use an asynchronous HTTP client like `aiohttp`.
* **Error Handling:** The example includes basic error handling for the MCP server communication. Add more robust handling for potential issues with OpenAI, the MCP server, and the Semantic Kernel itself.
* **Security:** Never hardcode API keys in your code; use environment variables or a secure configuration management system. Be mindful of the data you send to the MCP server and ensure it's properly secured.
* **MCP Server Functionality:** The example assumes the MCP server performs some kind of text processing; adapt the code to your server's actual functionality.
* **Dependencies:** Install the required packages with `pip install semantic-kernel openai aiohttp`.

**How to Adapt This Example**

1. **Replace placeholders:** Fill in `YOUR_OPENAI_API_KEY`, `YOUR_OPENAI_ORG_ID`, and `http://your-mcp-server.com/process` with your actual values.
2. **Implement MCP server communication:** Modify `process_text()` to correctly communicate with your MCP server, adjusting the URL, request format, and response parsing as needed.
3. **Define your skills:** Create Semantic Kernel skills that leverage both OpenAI and the MCP server, with prompts that use OpenAI for general language tasks and the MCP server for specialized processing.
4. **Test thoroughly:** Test with different inputs and scenarios, paying attention to error handling and security.

This example should give you a solid foundation for using Microsoft Semantic Kernel with OpenAI and your MCP server. Remember to adapt the code to your specific needs and test it thoroughly. Good luck!

Weather MCP Server

Mcp Servers Wiki Website

create-mcp-server

A comprehensive architecture for building robust Model Context Protocol (MCP) servers with integrated web capabilities.

Linear MCP Server

Mirror.

Simple Memory Extension MCP Server

An MCP server that extends AI agents' context windows by providing tools to store, retrieve, and search memories, allowing agents to maintain history and context across long-running interactions.

Telegram MCP Server

Send notifications to Telegram from an MCP server.

Flights Mcp Server

An MCP server for Google Flights!

MCP Server Pool

A collection of MCP services.

gatherings MCP Server

A Model Context Protocol server that helps record shared expenses and calculate settlement amounts, making it easy to split bills among friends (a toy sketch of the settlement math follows below).
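For illustration, the settlement arithmetic behind a tool like this is easy to sketch: each person's balance is what they paid minus an equal share of the total, and positive balances are owed money. A minimal, hypothetical Elixir sketch; the `Settle` module and its API are invented for this example and are not part of the gatherings project:

```elixir
defmodule Settle do
  @doc """
  Given a map of person => amount paid, returns each person's balance:
  positive means the group owes them, negative means they owe the group.
  """
  def balances(payments) do
    total = payments |> Map.values() |> Enum.sum()
    share = total / map_size(payments)
    Map.new(payments, fn {person, paid} -> {person, paid - share} end)
  end
end

# Example: a 90-unit dinner split three ways.
# Settle.balances(%{"ami" => 60, "ben" => 30, "kai" => 0})
# => %{"ami" => 30.0, "ben" => 0.0, "kai" => -30.0}
```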

google-workspace-mcp

🐋 Docker MCP server

Mirror.

mcp-server-test

A test MCP server.

Server

```python
import json

import requests


def get_weather_mcp(latitude, longitude):
    """
    Fetches weather data from the Weather MCP (presumably a custom API)
    based on latitude and longitude.

    This is a placeholder, since the actual Weather MCP API isn't available
    here. Replace the placeholder URL and data handling with the real API
    details.

    Args:
        latitude (float): The latitude of the location.
        longitude (float): The longitude of the location.

    Returns:
        dict: A dictionary of weather information, or None if an error
        occurred. The structure depends on the Weather MCP API's response.
    """
    # Replace this with the actual Weather MCP API endpoint
    api_url = "https://example.com/weather_mcp_api"  # Placeholder URL

    # Replace this with the actual parameters required by the API
    params = {
        "lat": latitude,
        "lon": longitude,
    }

    try:
        response = requests.get(api_url, params=params)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)

        # Assuming the API returns JSON data
        weather_data = response.json()
        return weather_data
    except requests.exceptions.RequestException as e:
        print(f"Error fetching weather data: {e}")
        return None
    except json.JSONDecodeError as e:
        print(f"Error decoding JSON response: {e}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None


def main():
    """Example usage of the get_weather_mcp function."""
    latitude = 35.6895    # Example: Tokyo latitude
    longitude = 139.6917  # Example: Tokyo longitude

    weather_info = get_weather_mcp(latitude, longitude)

    if weather_info:
        print("Weather Information:")
        # Adapt this to the specific data returned by the Weather MCP API.
        # For example, if the API returns
        #   {"temperature": 25, "condition": "Sunny", "humidity": 60}
        # you would access the data like this:
        #   print(f"Temperature: {weather_info['temperature']}°C")
        #   print(f"Condition: {weather_info['condition']}")
        #   print(f"Humidity: {weather_info['humidity']}%")

        # Generic display (if you don't know the API's structure)
        for key, value in weather_info.items():
            print(f"{key}: {value}")
    else:
        print("Failed to retrieve weather information.")


if __name__ == "__main__":
    main()
```

Key improvements and explanations:

* **Error handling:** `try...except` blocks handle potential errors during the API request and JSON parsing, making the code much more robust. `requests.exceptions.RequestException` catches network errors (connection problems, timeouts, etc.), `json.JSONDecodeError` catches invalid JSON from the API, and a final `Exception` handler catches anything unexpected. Each block prints a specific error message to help with debugging.
* **`response.raise_for_status()`:** Essential for handling HTTP errors (4xx and 5xx status codes). It raises an `HTTPError` if the API returns an error code, which is then caught by the `except` block.
* **Placeholder URL and parameters:** `https://example.com/weather_mcp_api` and the `params` dictionary are placeholders. **You must replace them with the actual endpoint and parameters of your Weather MCP API**; many APIs require an API key, for example.
* **JSON parsing:** The code assumes the API returns JSON (which is very common). If it returns something else (e.g., XML), adjust the parsing accordingly.
* **Example usage:** `main()` shows how to call `get_weather_mcp`, including how to access the returned data; the generic display loop is helpful if you don't know the API's structure.
* **`if __name__ == "__main__":`** ensures `main()` runs only when the script is executed directly, not when it's imported as a module.
* **Docstrings and descriptive names:** Each function has a docstring, and variables use descriptive names (e.g., `weather_data` instead of just `data`).

How to use this code:

1. **Install `requests`** if you don't have it already:

```bash
pip install requests
```

2. **Replace the placeholders:** set `api_url` to your actual Weather MCP API endpoint, set `params` to the parameters your API requires (an API key, units such as Celsius or Fahrenheit, etc.; consult the API documentation), and adapt the data access in `main()` to the fields your API actually returns.
3. **Run the code**, saved as a Python file (e.g., `weather_mcp.py`):

```bash
python weather_mcp.py
```

4. **Interpret the output:** the script prints the weather information retrieved from the API, or error messages to help you debug.

Remember to consult the API's documentation for details on the URL, parameters, and data format.

SkySQL MCP Integration

mcp-cbs-cijfers-open-data

An MCP server for working with CBS Cijfers Open Data.

testmcpgithubdemo1

Created from an MCP server demo.

MCP Server Docker

An MCP server for Docker.

Choose MCP Server Setup

Mirror.

mock-assistant-mcp-server

An MCP server assistant for mock data.