cloudprice-mcp

<!-- mcp-name: io.github.alialbaker/cloudprice-mcp -->


The FinOps MCP server. Gives Claude, GitHub Copilot, Cursor, Windsurf, Cline, Continue, Zed — or any MCP-compatible AI — structured pricing data and analysis primitives across AWS, Azure, GCP, and OCI. AI clients use cloudprice-mcp to compute Reserved Instance break-even, multi-cloud workload TCO, exit-cost migration analyses, snapshot cost modeling, and egress arbitrage — the kind of FinOps decisions that normally live in three browser tabs and a half-built spreadsheet.

16 tools covering compute, block storage, object storage, managed Postgres, egress (internet + inter-region with OCI's 10 TB free tier surfaced explicitly), Multi-AZ workloads, snapshots with realistic incremental modeling, Reserved Instance / Savings Plan discounts, FinOps decision suite (migration, commitment, TCO, egress arbitrage), and multi-cloud price history (the only public weekly-refreshed dataset of its kind). OCI Always Free tier (4 OCPU compute, 20 GB object storage, 10 TB egress) surfaced as $0 line items where it applies.

One-line install configures every AI client you have: pip install cloudprice-mcp && cloudprice-mcp setup — auto-detects Claude Desktop, GitHub Copilot Agent Mode, Cursor, Windsurf, Cline, Continue.dev, and Zed, then asks Y/N before writing each config.

demo

What does FinOps look like with cloudprice-mcp?

Real questions teams actually ask. Paste any of these into Claude / Copilot / Cursor with cloudprice-mcp loaded:

"I have 6× t3.2xlarge running on AWS. Compare the 3-year total cost on-demand vs 1-year Savings Plan vs 3-year RI partial upfront. What's the break-even month?" → AI calls compare_workload, pulls list-price baseline, layers AWS's published RI rates, returns dollar break-even. ~7-month payback typical.

"I'm thinking about offloading 5 TB of cold-tier object storage from AWS S3 to a cheaper provider. Compare archive-tier cost across all 4 clouds, factor in AWS exit egress, and tell me the payback period." → AI calls compare_object_storage + compare_egress, computes one-time exit cost vs ongoing savings. Often surfaces "don't move — AWS Glacier Deep Archive is already tied for cheapest".

"At 50 TB/month internet egress, where am I cheapest? Show the 3-year savings of moving."compare_egress → OCI ~$340/mo, AWS/Azure/GCP ~$4,000/mo. The 12× difference is OCI's 10 TB free tier — a real moat for content/CDN workloads.

"Size a 3-tier SaaS workload: 8 web (4/16), 12 app (8/32), 4 DB (16/64), 5 TB shared SSD, 50 TB HDD bulk, 10 TB/month egress. Compare full-stack monthly cost across all 4 clouds with multi-AZ and 1-year commitment." → AI chains compare_workload + compare_egress, applies multi-AZ multiplier (×2 compute) + commitment discount.

What you get back: dollar numbers traceable to a public catalog, AI-explained tradeoffs, payback periods, and the kind of "don't do that" recommendation that kills bad migrations before they happen. No console-clicking. No tab-switching between three pricing calculators. No FinOps spreadsheet that goes stale the moment a new SKU drops.


Install

Recommended (auto-config):

pip install cloudprice-mcp
cloudprice-mcp setup     # auto-configures every detected MCP client, asks Y/N before writing

Then fully restart whichever clients were configured. 16 tools appear in each. Done.

Trust spectrum:

| Command | When to use |
|---|---|
| `cloudprice-mcp setup` | Default — detects every installed client, shows the plan, asks Y/N once |
| `cloudprice-mcp setup --yes` | Skip the prompt (CI / scripts) |
| `cloudprice-mcp setup --client copilot` | Configure a specific client (repeatable: `--client copilot --client cursor`) |
| `cloudprice-mcp setup --all` | Configure every known client even if not detected |
| `cloudprice-mcp setup --force` | Refresh existing entries — useful after an upgrade or after moving Python |
| `cloudprice-mcp setup --dry-run` | Show per-client diffs without writing |
| `cloudprice-mcp setup --print-config` | Emit per-client JSON to stdout for manual pasting |
| `cloudprice-mcp setup --list-clients` | Detection table — which clients are known + installed on this system |
| Manual edit | For when you'd rather not let a new tool write your configs — see the per-client sections in INSTALL.md |

If something doesn't work, run:

cloudprice-mcp doctor

It tells you exactly what's broken (Python version, install path, config location, tool registration, command path validity).

Python 3.10+ required.

For step-by-step manual install (Windows / macOS / Linux), see INSTALL.md.

Tools exposed

Single-spec lookups (v0.1)

| Tool | What it does |
|---|---|
| `get_aws_price` | Look up an EC2 instance type → vCPUs, memory, hourly + monthly USD (us-east-1) |
| `get_azure_price` | Look up an Azure VM size → vCPUs, memory, hourly + monthly USD (eastus) |
| `get_gcp_price` | Look up a GCP Compute Engine machine type → vCPUs, memory, hourly + monthly USD (us-east1) |
| `compare_clouds` | Given a target spec (vCPUs + GB), return the cheapest matching SKU across AWS / Azure / GCP / OCI, sorted by monthly cost, with savings summary |
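For intuition, the cross-cloud matching can be sketched in a few lines of Python — a minimal illustration with made-up catalog entries and prices, not the server's actual implementation:

```python
# Hypothetical sketch of compare_clouds-style matching: per cloud, pick the
# cheapest SKU whose vCPUs and memory meet or exceed the target, then sort
# clouds by monthly cost. All SKU prices below are illustrative.
CATALOG = {
    "aws":   [{"sku": "m5.xlarge",           "vcpus": 4, "memory_gb": 16, "monthly_usd": 140.16}],
    "azure": [{"sku": "D4s_v5",              "vcpus": 4, "memory_gb": 16, "monthly_usd": 140.16}],
    "gcp":   [{"sku": "n2-standard-4",       "vcpus": 4, "memory_gb": 16, "monthly_usd": 141.61}],
    "oci":   [{"sku": "VM.Standard.E5.Flex", "vcpus": 4, "memory_gb": 16, "monthly_usd": 98.55}],
}

def compare_clouds(vcpus: int, memory_gb: int) -> list[dict]:
    """Cheapest matching SKU per cloud, sorted by monthly cost ascending."""
    matches = []
    for cloud, skus in CATALOG.items():
        # keep only SKUs that satisfy the requested spec
        fits = [s for s in skus if s["vcpus"] >= vcpus and s["memory_gb"] >= memory_gb]
        if fits:
            best = min(fits, key=lambda s: s["monthly_usd"])
            matches.append({"cloud": cloud, **best})
    return sorted(matches, key=lambda m: m["monthly_usd"])

result = compare_clouds(4, 16)  # cheapest cloud first
```

With these example numbers OCI sorts first, which mirrors the savings-summary behavior the real tool reports.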

Bulk + workload compare (v0.2)

| Tool | What it does |
|---|---|
| `compare_compute_inventory` | Bulk-compare a list of compute workloads (each with vCPUs / memory / quantity / hours / optional OS disk) across all 4 clouds. Returns per-row matches, per-cloud totals, and the cheapest cloud. |
| `compare_storage_inventory` | Bulk-compare a list of block-storage volumes (each with capacity / disk type / quantity) across all 4 clouds. |
| `compare_workload` | Combined compute + block storage in one call. Mirrors a two-sheet sizing workbook (compute BoM + storage BoM). Optional commitment overlay applies a 1-year (30%) or 3-year (50%) compute discount. |

Object storage + managed Postgres (v0.3)

| Tool | What it does |
|---|---|
| `compare_object_storage` | Bulk-compare object-storage buckets across AWS S3 / Azure Blob / GCP Cloud Storage / OCI Object Storage. Each row specifies `capacity_gb` + `tier` (hot / cool / archive). OCI Always Free 20 GB tier surfaced explicitly — capacity ≤ 20 GB on OCI hot tier returns $0/mo. |
| `compare_postgres_database` | Bulk-compare managed PostgreSQL pricing across AWS RDS / Azure Database for PostgreSQL / GCP Cloud SQL / OCI Database with PostgreSQL. Each row specifies vCPUs / memory / storage_gb. Storage cost is calculated separately from compute. |
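The compute/storage split can be illustrated with a toy cost function — the per-vCPU, per-GB-RAM, and per-GB-storage rates below are made up for the example; the real tool matches catalog SKUs rather than pricing each resource independently:

```python
# Illustrative only: managed-Postgres monthly cost as separate compute and
# storage components. All rates are hypothetical, not bundled catalog values.
def postgres_monthly_cost(vcpus: int, memory_gb: int, storage_gb: int,
                          vcpu_hourly: float = 0.0255,
                          gb_ram_hourly: float = 0.0035,
                          storage_gb_month: float = 0.115,
                          hours: int = 730) -> float:
    compute = (vcpus * vcpu_hourly + memory_gb * gb_ram_hourly) * hours
    storage = storage_gb * storage_gb_month  # billed separately from compute
    return round(compute + storage, 2)
```

The point of the split is that storage grows independently of instance size, so resizing the database doesn't change the storage line.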

FinOps decision suite (v0.6, NEW)

Four named tools that turn cross-cloud pricing into FinOps decisions in one call instead of letting the AI chain three+ tools. All four consume a structured workload inventory (compute / storage / object_storage / databases / egress) plus tool-specific options.

| Tool | What it does |
|---|---|
| `assess_migration` | "Should I move?" — projects per-target-cloud cost, savings %, one-time exit egress cost, and payback months. Returns a recommendation ranked by 3-year TCO with triggered caveats (e.g., "OCI A1.Flex is ARM — verify your AMIs"). |
| `optimize_commitment` | "When does my RI / SP / CUD pay back?" — six commitment scenarios (none / 1yr_no_upfront / 1yr_all_upfront / 3yr_no_upfront / 3yr_partial_upfront / 3yr_all_upfront) with per-scenario monthly cost, upfront, 3-year total, savings %, and payback months. Recommends the lowest 3-year TCO option. |
| `compare_total_cost_of_ownership` | "What's my 3-year cost across clouds?" — multi-year projection with linear YoY growth assumptions for compute / storage / egress. Returns cumulative TCO per cloud, a year-by-year breakdown, and sensitivity analysis on the dominant variable. The kind of number that goes into board decks. |
| `find_egress_arbitrage` | "Where do I save on data transfer?" — a specialized `assess_migration` scoped to egress only. Surfaces the OCI 12× moat: at 50 TB/month internet egress, OCI is ~$340 vs $4,000+ on the hyperscalers. |

All four tools accept a WorkloadInventory shape that mirrors a 4-section sizing sheet (compute / storage / object_storage / databases / egress) plus optional commitment, multi_az, and one_time.data_to_migrate_gb fields. Output includes honest_gaps — explicit list of what each tool does NOT model — to prevent over-trust.
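The linear YoY growth projection behind compare_total_cost_of_ownership can be sketched roughly like this — component names follow the inventory sections above, but the exact growth model is an assumption of this illustration:

```python
# Hypothetical sketch of a multi-year TCO projection with linear YoY growth
# per cost component (compute / storage / egress).
def project_tco(monthly_costs: dict, growth: dict, years: int = 3) -> list[float]:
    """monthly_costs: {'compute': usd, 'storage': usd, 'egress': usd}
    growth: per-component YoY growth as a fraction (0.2 = +20%/yr).
    Returns the annual cost for each year."""
    yearly = []
    for year in range(years):
        total = 0.0
        for component, monthly in monthly_costs.items():
            g = growth.get(component, 0.0)
            total += monthly * 12 * (1 + g * year)  # linear, not compounding
        yearly.append(round(total, 2))
    return yearly

years = project_tco({"compute": 1000, "storage": 200, "egress": 100},
                    {"compute": 0.2})
# compute grows $200/mo per year; storage and egress stay flat
```

Cumulative TCO is just the sum of the list; the sensitivity analysis then perturbs whichever component dominates that sum.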

Egress + Multi-AZ + better snapshots (v0.5, NEW)

| Tool / Feature | What it does |
|---|---|
| `compare_egress` | Compare data-transfer costs across all 4 clouds. Two directions: `out_to_internet` (tiered pricing with free-tier credits — AWS/Azure 100 GB, OCI 10 TB) and `inter_region` (cross-region within the same cloud). At 50 TB/month internet egress, OCI is ~12× cheaper than the hyperscalers — a real moat for content/CDN workloads. |
| `compare_workload` `multi_az: true` | New flag doubles compute totals on every cloud to model Multi-AZ / HA deployments (sync replicas across two zones). Storage stays at 1× because object/block storage is usually cross-AZ at base price. |
| `snapshot_incremental_factor` | New per-row field on storage and OS-disk snapshots. Default `1.0` keeps the v0.2 upper-bound estimate. Set to `0.3` for typical real-world incremental dedup, or `0.0` to exclude snapshots from the total. |
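The free-tier-plus-tiers egress math can be sketched as follows — tier boundaries and rates here are illustrative, not the bundled catalog's exact numbers:

```python
# Hypothetical sketch of tiered internet-egress pricing with a free-tier
# credit, in the spirit of compare_egress. Tiers and rates are illustrative.
def egress_cost(gb: float, free_gb: float,
                tiers: list[tuple[float, float]]) -> float:
    """tiers: [(tier_size_gb, usd_per_gb), ...]; last tier size may be inf."""
    billable = max(0.0, gb - free_gb)  # free tier comes off the top
    cost = 0.0
    for size, rate in tiers:
        take = min(billable, size)
        cost += take * rate
        billable -= take
        if billable <= 0:
            break
    return round(cost, 2)

# OCI: 10 TB free then a low flat rate, vs a hyperscaler with 100 GB free
oci = egress_cost(50_000, free_gb=10_000, tiers=[(float("inf"), 0.0085)])
hyperscaler = egress_cost(50_000, free_gb=100,
                          tiers=[(10_240, 0.09), (40_960, 0.085),
                                 (float("inf"), 0.07)])
```

With these illustrative rates the 50 TB/month scenario lands near $340 on OCI versus roughly $4,000+ on the hyperscaler — the same order-of-magnitude gap the table describes.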

Example: compare_workload input shape

{
  "compute": [
    { "name": "web", "tier": "Web", "vcpus": 4, "memory_gb": 16, "quantity": 8,  "os_disk_gb": 100, "os_disk_type": "ssd" },
    { "name": "app", "tier": "App", "vcpus": 8, "memory_gb": 32, "quantity": 12, "os_disk_gb": 200, "os_disk_type": "ssd" },
    { "name": "db",  "tier": "DB",  "vcpus": 16, "memory_gb": 64, "quantity": 4, "os_disk_gb": 500, "os_disk_type": "ssd" }
  ],
  "storage": [
    { "name": "shared-fast", "tier": "DB",  "capacity_gb": 5000,  "disk_type": "ssd" },
    { "name": "shared-bulk", "tier": "App", "capacity_gb": 50000, "disk_type": "hdd" }
  ]
}

Snapshots (v0.2.1)

snapshot_count on storage rows and os_disk_snapshot_count on compute rows are now priced. Snapshot rates per cloud per disk type are bundled (~$0.05/GB-mo for AWS/Azure, ~$0.026/GB-mo for GCP).

Caveat — upper-bound estimate: snapshots are priced as snapshot_per_gb_month × full_capacity × quantity × snapshot_count. Real-world snapshots are incremental (only changed blocks), so actual cost is typically 20-50% of this model's number. If snapshots dominate your total, ask the cloud's calculator for a tighter estimate.
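In code, the model above (with the v0.5 snapshot_incremental_factor applied) is roughly:

```python
# The upper-bound snapshot model described above: every snapshot priced at
# full capacity, optionally scaled by an incremental-dedup factor.
def snapshot_cost(per_gb_month: float, capacity_gb: float, quantity: int,
                  snapshot_count: int, incremental_factor: float = 1.0) -> float:
    return round(per_gb_month * capacity_gb * quantity * snapshot_count
                 * incremental_factor, 2)

# 4× 500 GB volumes, 7 snapshots each, at the bundled ~$0.05/GB-mo rate
upper = snapshot_cost(0.05, 500, 4, 7)          # v0.2 upper bound
typical = snapshot_cost(0.05, 500, 4, 7, 0.3)   # realistic incremental dedup
```

The 0.3 factor here matches the "20-50% of the model's number" range: the upper bound prices 7 full copies, while real incrementals only store changed blocks after the first.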

iops and throughput_mbs on storage rows are still accepted as metadata only — not used for SKU matching in this release.

Reserved Instance / Savings Plan estimator (v0.2.1)

compare_workload accepts an optional commitment parameter:

| Value | Compute discount | Use case |
|---|---|---|
| `none` (default) | 0% | On-demand only |
| `1yr_no_upfront` | 30% | 1-year AWS Savings Plan / Azure RI / GCP CUD (no upfront) |
| `3yr_partial_upfront` | 50% | 3-year, partial upfront — typical "we know our baseline" deals |

Storage and snapshots are not discounted (most clouds don't offer meaningful storage commitments). Discount tiers are conservative averages — your actual rate depends on instance family, payment option, and region.
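A rough sketch of the commitment math: discount tiers follow the table above, while the upfront split and the payback rule are illustrative assumptions of this example, not the tool's exact internals:

```python
# Hypothetical optimize_commitment-style scenario math: 3-year total,
# upfront, committed monthly, and the payback month vs pure on-demand.
def scenario(monthly_on_demand: float, discount: float,
             upfront_share: float = 0.0, years: int = 3) -> dict:
    committed_total = monthly_on_demand * 12 * years * (1 - discount)
    upfront = committed_total * upfront_share
    monthly = (committed_total - upfront) / (12 * years)
    # payback: first month where cumulative on-demand spend catches up to
    # the upfront payment plus cumulative committed monthly spend
    payback = None
    for m in range(1, 12 * years + 1):
        if monthly_on_demand * m >= upfront + monthly * m:
            payback = m
            break
    return {"monthly": round(monthly, 2), "upfront": round(upfront, 2),
            "total": round(committed_total, 2), "payback_month": payback}

# 3-year partial upfront at the 50% tier, $1,000/mo on-demand baseline
ri = scenario(1000, discount=0.50, upfront_share=0.5)
```

At a 50% discount with half the committed total paid upfront, the break-even lands around month 12 in this sketch — the same order as the "~7-month payback typical" figure, which assumed a smaller upfront share.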

Pricing data

Prices are bundled as a curated dataset of common SKUs across 4 clouds:

  • Compute (~50 VM SKUs across AWS / Azure / GCP / OCI, including OCI A1 Always Free + A2 Arm Ampere + E5 Flex)
  • Block storage (SSD + HDD per cloud)
  • Object storage (Hot / Cool / Archive tiers per cloud, including OCI Always Free 20 GB)
  • Managed PostgreSQL (RDS / Azure DB / Cloud SQL / OCI Database with PostgreSQL)

Auto-refreshed weekly (v0.7+)

The bundled catalog is refreshed every Sunday by a GitHub Action that hits each cloud's public pricing API.

Each refresh writes a dated snapshot to src/cloudprice_mcp/data/prices/YYYY-MM-DD.json — every JSON ever published lives in the repo. The history archive is MIT-licensed and grows with every release.

Every tool result includes the catalog's as_of field so you know exactly which prices were used.

Public price history dataset (v0.7.1+)

cloudprice-mcp is the only FinOps tool we know of that preserves every weekly snapshot. You can query "what did m5.xlarge cost in May?" — neither AWS Calculator nor GCP Estimator can answer that because their pages always show today.

Query the history from the CLI:

cloudprice-mcp history --cloud oci --sku VM.Standard.E5.Flex.4OCPU
# oci/VM.Standard.E5.Flex.4OCPU (us-ashburn-1) — 2 data point(s)
#
# AS_OF          HOURLY USD
# --------------------------
# 2026-04-26   $    0.67600
# 2026-05-12   $    0.18400
#
# Change: -72.78% ($-0.49200/h)

The -72.78% drop is the v0.7.0 auto-refresh fixing a hand-curated inaccuracy in the prior OCI catalog — proof that the auto-refresh story works.

Query the history from AI assistants via two new MCP tools:

  • get_price_history(cloud, sku, since?) — full timeseries + change stats
  • list_tracked_skus(cloud?, since?) — every (cloud, sku) pair we have history for
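The change stats can be reproduced from any two snapshot data points — the field handling here is illustrative, using the OCI example from the CLI output above:

```python
# Hypothetical sketch of get_price_history-style change stats from dated
# catalog snapshots. Point format is assumed: (as_of_date, hourly_usd).
def change_stats(points: list[tuple[str, float]]) -> dict:
    """points must be sorted by date; compares first vs last price."""
    first, last = points[0][1], points[-1][1]
    return {
        "start": first,
        "end": last,
        "abs_change": round(last - first, 5),
        "pct_change": round((last - first) / first * 100, 2),
    }

# the VM.Standard.E5.Flex.4OCPU history shown in the CLI example
stats = change_stats([("2026-04-26", 0.676), ("2026-05-12", 0.184)])
```

Running it on the two OCI data points reproduces the -72.78% change the CLI prints.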

Real questions this unlocks:

"Has AWS m5.xlarge changed price in the last quarter?" → AI calls get_price_history, returns timeseries with start/end prices and % change.

"Show me every multi-cloud price mover since January." → AI calls list_tracked_skus(since="2026-01-01"), returns every SKU + its latest price + change.

What's NOT modeled (real-world TCO killers)

  • Egress / data transfer — modeled in v0.5 (compare_egress)
  • Multi-AZ / HA replicas — modeled in v0.5 (multi_az: true on compare_workload)
  • Snapshots upper-bound only — fixed in v0.5 (snapshot_incremental_factor)
  • Reserved/Savings Plan SKU detail (we apply a flat tier discount, not per-region/per-family detail) — roadmap
  • Multi-region pricing (currently us-east only; us-west / eu-west planned for v0.5.1) — roadmap
  • IOPS-based storage matching (capacity-only) — roadmap
  • Backup storage charges (some clouds free, others billed) — roadmap
  • Request costs (PUT/GET pricing for object storage) — roadmap
  • Retrieval costs for archive tiers (Glacier-style retrieval can be 10× the storage cost) — roadmap
  • VPC peering / interconnect costs — roadmap

These are tracked roadmap items. Use cloudprice-mcp for the on-demand list-price baseline; do final TCO analysis with each cloud's own calculator before relying on numbers for big decisions.

Live runtime pricing (not just weekly refresh) is being considered for v0.8 — would fetch prices directly at MCP tool invocation time instead of from the bundled catalog. Trade-offs: slower (network call per tool use), adds GCP auth requirement, breaks offline mode. The v0.7 weekly auto-refresh covers ~95% of the credibility win at zero runtime cost; live mode is opt-in territory.

Develop locally

git clone https://github.com/Albaker-Group/cloudprice-mcp.git
cd cloudprice-mcp
pip install -e ".[dev]"
pytest

To point Claude Desktop at your dev copy, swap the command in the config:

{
  "mcpServers": {
    "cloudprice": {
      "command": "python",
      "args": ["-m", "cloudprice_mcp.server"]
    }
  }
}

License

MIT — see LICENSE.

Credits

Built by Ali Albaker, multi-cloud architect — runs a live three-cloud portfolio at ~$1.80/month across AWS, Azure, and GCP, with OCI joining as the 4th cloud in 2026.
