# MCP Weather Server & Travel Agent
A learning project that builds a local Model Context Protocol (MCP) server backed by the free OpenWeatherMap API, a diagnostic MCP client, and an AI-powered travel-planning agent that combines MCP tool execution with OpenAI function calling.
## Business Context
The project answers a practical question: How can an LLM fetch live, external data — and act on it — through a standardised protocol?
MCP lets any AI host (Cursor, Claude Desktop, custom agents) discover and invoke server-provided Tools, read Resources, and retrieve reusable Prompt templates without hard-coding integrations. This project wires that idea end-to-end using real-time weather data.
## What you can do with it
| Capability | Example |
|---|---|
| Ask Cursor for live weather | "What's the weather in Tokyo?" — Cursor calls get_weather via MCP |
| Generate a travel plan | uv run travel_agent.py "Paris" 5 — GPT reasons over real forecasts |
| Explore all three MCP primitives | uv run client.py — enumerates and exercises tools, resources, prompts |
## Project Structure

```
MCP2/
├── server.py            # MCP server — exposes tools, resources, prompts
├── client.py            # Diagnostic MCP client — exercises every primitive
├── travel_agent.py      # AI travel agent — MCP + OpenAI function calling
├── main.py              # Scaffold entry point (placeholder)
├── .env                 # API keys (gitignored)
├── .python-version      # Pins Python 3.14
├── pyproject.toml       # Project metadata & dependencies (managed by uv)
├── uv.lock              # Locked dependency graph
├── .gitignore
└── .cursor/
    └── mcp.json         # Cursor IDE MCP server config
```
## Architecture & End-to-End Flows

### Component Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ MCP HOST / CLIENT │
│ │
│ ┌──────────┐ ┌──────────────┐ ┌────────────────────────┐ │
│ │ Cursor │ │ client.py │ │ travel_agent.py │ │
│ │ IDE │ │ (diagnostic) │ │ (AI agent) │ │
│ └────┬─────┘ └──────┬───────┘ └───────────┬────────────┘ │
│ │ │ │ │
│ │ stdio │ stdio │ stdio │
│ └────────┬───────┴────────────────────────┘ │
│ │ │
├────────────────┼────────────────────────────────────────────────┤
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ server.py (FastMCP) │ │
│ │ │ │
│ │ TOOLS RESOURCES PROMPTS │ │
│ │ ───── ───────── ─────── │ │
│ │ get_weather weather://cities weather_report │ │
│ │ get_forecast weather://help travel_advisory │ │
│ │ │ │
│ │ fetch_weather(endpoint, params) │ │
│ └──────────────────────┬────────────────────────────────┘ │
│ │ httpx (async) │
│ ▼ │
│ ┌──────────────────────────────┐ │
│ │ OpenWeatherMap REST API │ │
│ │ /weather /forecast │ │
│ └──────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
```
### Flow 1 — Cursor IDE (interactive usage)
User types in Cursor:
"What's the weather in Bangalore?"
1. Cursor reads .cursor/mcp.json
2. Spawns `uv run server.py` as a child process (stdio transport)
3. MCP handshake: Cursor sends `initialize`, server responds with capabilities
4. Cursor discovers tools via `tools/list` → learns about get_weather, get_forecast
5. The LLM decides get_weather(city="Bangalore") is needed
6. Cursor sends `tools/call` → server receives request
7. server.py → fetch_weather("weather", {"q": "Bangalore"})
→ httpx GET https://api.openweathermap.org/data/2.5/weather?q=Bangalore&appid=…&units=metric
8. OpenWeatherMap returns JSON → server formats a string → returns via MCP
9. Cursor displays the result to the user
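Steps 3–6 are ordinary JSON-RPC 2.0 messages flowing over stdin/stdout. Two of the key client→server messages look roughly like this (the protocol version string and `clientInfo` values are illustrative, not copied from a real trace):

```json
{"jsonrpc": "2.0", "id": 1, "method": "initialize",
 "params": {"protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "cursor", "version": "1.0"}}}

{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
 "params": {"name": "get_weather", "arguments": {"city": "Bangalore"}}}
```

The first message opens the handshake (step 3); the second invokes the tool the LLM selected (step 6).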
### Flow 2 — Diagnostic Client (`uv run client.py`)

The client runs a single, linear script that exercises every MCP primitive:
1. Launch server.py as subprocess via stdio_client(StdioServerParameters)
2. ClientSession handshake → session.initialize()
```
┌──────────────────────────────────────────────────┐
│ TOOLS │
│ a. list_tools() → enumerate all tools │
│ b. call_tool("get_weather", {city: "Toronto"}) │
│ c. call_tool("get_forecast", {city, days: 3}) │
├──────────────────────────────────────────────────┤
│ RESOURCES │
│ a. list_resources() → enumerate all resources │
│ b. read_resource("weather://cities") │
│ c. read_resource("weather://help") │
├──────────────────────────────────────────────────┤
│ PROMPTS │
│ a. list_prompts() → enumerate all prompts │
│ b. get_prompt("weather_report", {city: "Tokyo"}) │
│ c. get_prompt("travel_advisory", {city, days}) │
└──────────────────────────────────────────────────┘
```
3. Results printed to terminal; process exits
### Flow 3 — Travel Planner Agent (`uv run travel_agent.py "Tokyo" 5`)
This is the most complex flow. It implements an agentic loop where GPT decides which MCP tools to call and when to stop.
1. Parse CLI args (city, days)
2. Connect to Weather MCP server via stdio (same as client.py)
3. MCP handshake → session.initialize()
4. Tool discovery:
a. session.list_tools() → get MCP tool definitions
b. mcp_tools_to_openai_functions() → translate MCP inputSchema
into OpenAI function-calling format
5. Resource prefetch:
a. session.list_resources()
b. Read all resources (cities list, help text) → inject as system context
6. Build initial message list:
```
┌────────────────────────────────────────────────────────┐
│ system: SYSTEM_PROMPT (travel planner personality) │
│ system: resource context (cities list, help text) │
│ user: "Plan a 5-day trip to Tokyo. Use the │
│ weather tools to check conditions..." │
└────────────────────────────────────────────────────────┘
```
7. Agent reasoning loop (max 4 iterations):
```
┌───────────────────────────────────────────────────────────┐
│ LOOP START │
│ │
│ a. Send messages + tool definitions to OpenAI API │
│ → openai_client.chat.completions.create( │
│ model, messages, tools, tool_choice="auto") │
│ │
│ b. IF response contains tool_calls: │
│ ┌─────────────────────────────────────────────────┐ │
│ │ For each tool_call: │ │
│ │ - Parse function name + arguments │ │
│ │ - Execute via MCP: │ │
│ │ session.call_tool(name, args) │ │
│ │ - Extract text from MCP result │ │
│ │ - Append tool result to messages │ │
│ └─────────────────────────────────────────────────┘ │
│ → Continue loop (LLM sees tool results next round) │
│ │
│ c. ELSE IF response contains text content: │
│ → This is the FINAL answer. Print travel plan. BREAK │
│ │
│ d. ELSE (empty response): │
│ → Retry │
│ │
│ LOOP END │
└───────────────────────────────────────────────────────────┘
```
8. Print travel plan with iteration count
Typical execution: 3 LLM calls — (1) LLM requests get_weather, (2) LLM
requests get_forecast, (3) LLM produces the final travel plan using both results.
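The translation in step 4b is thin because MCP's `inputSchema` is already JSON Schema, which is the same shape OpenAI's function-calling API expects in `parameters`. A sketch of what `mcp_tools_to_openai_functions()` might look like, using plain dicts in place of the SDK's tool objects:

```python
def mcp_tools_to_openai_functions(mcp_tools):
    """Convert MCP tool definitions to OpenAI function-calling format.

    `mcp_tools` is assumed to be a list of dicts with the fields MCP
    exposes: name, description, inputSchema (JSON Schema)."""
    functions = []
    for tool in mcp_tools:
        functions.append({
            "type": "function",
            "function": {
                "name": tool["name"],
                "description": tool.get("description", ""),
                # inputSchema passes through essentially unchanged —
                # both sides speak JSON Schema.
                "parameters": tool.get(
                    "inputSchema", {"type": "object", "properties": {}}
                ),
            },
        })
    return functions
```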
## MCP Primitives Reference

### Tools (callable functions)
| Tool | Parameters | Returns |
|---|---|---|
| `get_weather` | `city: str` | Current temp, feels-like, humidity, wind, description |
| `get_forecast` | `city: str`, `days: int` (1–5, default 3) | One line per day with temp and description |
Both call `fetch_weather()` internally, which appends the API key and `units=metric`, then issues an async GET via `httpx` to the OpenWeatherMap endpoint. Errors are caught and returned as user-friendly strings (not exceptions).
### Resources (read-only context)

| URI | Description |
|---|---|
| `weather://cities` | JSON array of 10 example city names |
| `weather://help` | Human-readable help text listing all capabilities |
Resources are synchronous and return static/computed strings.
### Prompts (reusable templates)

| Prompt | Parameters | What it generates |
|---|---|---|
| `weather_report` | `city: str` | Instruction for the LLM to fetch weather + forecast and summarise |
| `travel_advisory` | `city: str`, `days: int` | Instruction for the LLM to build a packing/activity/warning list |
Prompts return instruction strings; they do not call tools themselves.
Note: MCP prompt arguments are `dict[str, str]` on the wire. When calling `get_prompt(...)` from the Python SDK, pass all values as strings (e.g. `{"days": "5"}`, not `{"days": 5}`).
## Tech Stack
| Layer | Technology |
|---|---|
| Language | Python 3.14 |
| Package manager | uv |
| MCP SDK | mcp[cli] >= 1.26.0 (FastMCP) |
| HTTP client | httpx >= 0.28.1 (async) |
| LLM (travel agent) | OpenAI gpt-4o-mini via openai >= 2.30.0 |
| Config loading | python-dotenv >= 1.2.2 |
| External API | OpenWeatherMap 2.5 (free tier) |
## Prerequisites

- Python >= 3.14
- uv installed and on your PATH
- OpenWeatherMap API key — free at https://openweathermap.org/appid
- OpenAI API key (only needed for `travel_agent.py`)
## Setup

1. Install dependencies:

   ```
   uv sync
   ```

2. Create `.env` in the project root:

   ```
   OPENWEATHER_API_KEY=your_openweather_key
   OPENWEATHER_BASE_URL=https://api.openweathermap.org/data/2.5
   OPENAI_API_KEY=sk-...   # only needed for travel_agent.py
   ```

   `.env` is gitignored. Never commit API keys.
## Running

### MCP Server (standalone / via Cursor)

```
uv run server.py
```
The server starts on stdio and waits for MCP messages. You don't interact with it directly in the terminal — it's designed to be driven by an MCP host (Cursor, the client, or the travel agent).
### Diagnostic Client

```
uv run client.py
```
Runs through tools, resources, and prompts sequentially and prints results.
### Travel Planner Agent

```
uv run travel_agent.py "Tokyo" 5
uv run travel_agent.py "Paris" 3
uv run travel_agent.py "Toronto"      # defaults to 3 days
```
Connects to the MCP server, lets GPT reason with real weather data, and outputs a day-by-day travel plan.
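The optional-days behaviour can be sketched as a small parser. `parse_args` and the 1–5 clamp are illustrative assumptions, not necessarily the project's exact code:

```python
def parse_args(argv):
    """Parse [city, days?] from CLI arguments; days defaults to 3."""
    if not argv:
        raise SystemExit('usage: uv run travel_agent.py "<city>" [days]')
    city = argv[0]
    days = int(argv[1]) if len(argv) > 1 else 3
    # Clamp to the 1-5 range get_forecast supports.
    return city, max(1, min(days, 5))
```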
### MCP Inspector (interactive debugging)

```
npx @modelcontextprotocol/inspector uv run server.py
```
Opens a web UI to manually invoke tools, read resources, and test prompts.
## Cursor Integration

Project-level config lives at `.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "weather": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "${workspaceFolder}/server.py"],
      "envFile": "${workspaceFolder}/.env"
    }
  }
}
```
- `${workspaceFolder}` is resolved by Cursor to the directory containing `.cursor/mcp.json`.
- `envFile` injects `.env` variables into the spawned server process.
- After editing this file, reload MCP servers or restart Cursor.
Once loaded, Cursor's agent can call get_weather and get_forecast directly
when you ask weather-related questions in chat.
## Key Technical Details

- **Transport:** All three clients (Cursor, `client.py`, `travel_agent.py`) connect over stdio. The server is launched as a subprocess; MCP messages flow over stdin/stdout as JSON-RPC.
- **Async throughout:** `server.py` uses `async def` for tool handlers and `httpx.AsyncClient` for non-blocking HTTP. `client.py` and `travel_agent.py` run inside `asyncio.run()`.
- **MCP → OpenAI schema translation:** `travel_agent.py` converts MCP `inputSchema` (JSON Schema) to OpenAI's `tools[].function.parameters` format. The schemas are nearly identical by design.
- **Agent loop safety:** The travel agent caps the reasoning loop at 4 iterations to prevent runaway API calls. Typical runs complete in 2–3 iterations.
- **Error handling:** Tool handlers catch `httpx.HTTPStatusError` and generic exceptions, returning error strings instead of raising. This keeps the MCP session alive even if the upstream API fails.
- **Forecast de-duplication:** OpenWeatherMap's `/forecast` returns 3-hour intervals. `get_forecast` deduplicates by date, picking the first entry per calendar day.
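The de-duplication described in the last bullet can be sketched as follows. `dt_txt` is the timestamp field OpenWeatherMap's `/forecast` actually returns (e.g. `"2024-05-01 12:00:00"`); the helper name is illustrative:

```python
def first_entry_per_day(forecast_items):
    """Collapse 3-hour forecast entries to one per calendar day,
    keeping the earliest entry for each date."""
    seen_days = set()
    daily = []
    for item in forecast_items:
        day = item["dt_txt"].split(" ")[0]   # "YYYY-MM-DD"
        if day not in seen_days:
            seen_days.add(day)
            daily.append(item)
    return daily
```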
## Gotchas

- `get_prompt(...)` arguments must be strings in the Python MCP SDK (e.g. `{"days": "5"}`, not `{"days": 5}`).
- The free OpenWeatherMap tier is rate-limited (~60 calls/min). The travel agent makes 2 upstream API calls per run.
- `travel_agent.py` requires `OPENAI_API_KEY` in `.env`. The server and diagnostic client do not.
## License
No license specified.