tokencost-dev
LLM pricing oracle — model lookup, cost estimation, and comparison via LiteLLM. An MCP server that gives AI assistants accurate, up-to-date model pricing from the LiteLLM community registry. Look up pricing and capabilities for any LLM with fuzzy matching, estimate costs for token volumes, and compare models by provider, context window, or mode.
<p align="center">
  <img src="docs/src/assets/tokencost-logo.png" width="120" />
</p>

<h1 align="center">tokencost</h1>

<p align="center">
  <a href="https://www.npmjs.com/package/tokencost-dev"><img src="https://img.shields.io/npm/v/tokencost-dev" alt="npm version" /></a>
  <a href="https://www.npmjs.com/package/tokencost-dev"><img src="https://img.shields.io/npm/dm/tokencost-dev" alt="npm downloads" /></a>
  <a href="https://github.com/atriumn/tokencost-dev/actions/workflows/ci.yml"><img src="https://img.shields.io/github/actions/workflow/status/atriumn/tokencost-dev/ci.yml?label=CI" alt="CI status" /></a>
  <a href="LICENSE"><img src="https://img.shields.io/npm/l/tokencost-dev" alt="license" /></a>
</p>

<p align="center">Ask your AI assistant "how much does GPT-4o cost?" — get an instant, accurate answer.</p>
<p align="center"> <img src="tokencost-dev.gif" alt="tokencost demo in Claude Code" width="700" /> </p>
## Install in 30 seconds
Claude Code:

```bash
claude mcp add tokencost-dev -- npx -y tokencost-dev
```
Then ask: "How much would 1M input tokens cost on claude-sonnet-4-5?"
Cursor (`.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "tokencost-dev": {
      "command": "npx",
      "args": ["-y", "tokencost-dev"]
    }
  }
}
```
No API keys. No accounts. No configuration files. Pricing data is fetched from the LiteLLM community registry and cached locally for 24 hours.
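For reference, the registry is a single public JSON file. Below is a minimal TypeScript sketch of reading it directly, assuming LiteLLM's published schema (per-token USD prices under keys like `input_cost_per_token`); it is illustrative, not tokencost-dev's actual implementation:

```typescript
// Sketch: fetch the LiteLLM community pricing registry directly.
// Field names below follow LiteLLM's published schema (an assumption
// for this sketch, not a description of tokencost-dev's internals).
const REGISTRY_URL =
  "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json";

interface ModelEntry {
  input_cost_per_token?: number; // USD per single token
  output_cost_per_token?: number;
  max_input_tokens?: number;
  max_output_tokens?: number;
  litellm_provider?: string;
  mode?: string;
}

async function fetchRegistry(): Promise<Record<string, ModelEntry>> {
  const res = await fetch(REGISTRY_URL); // global fetch, Node 18+
  if (!res.ok) throw new Error(`registry fetch failed: ${res.status}`);
  return (await res.json()) as Record<string, ModelEntry>;
}

const registry = await fetchRegistry();
const entry = registry["claude-sonnet-4-5"]; // key assumed present
// Registry prices are per token; multiply by 1e6 for the per-1M figure.
console.log(`input $/1M: ${(entry?.input_cost_per_token ?? 0) * 1_000_000}`);
```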
## Tools

### get_model_details
Look up pricing, context window, and capabilities for any model. Fuzzy matching means "sonnet 4.5" works just as well as "claude-sonnet-4-5-20250929".
> "What are Claude Sonnet 4.5's pricing and capabilities?"
Model: claude-sonnet-4-5
Provider: anthropic | Mode: chat
Pricing (per 1M tokens):
Input: $3.00
Output: $15.00
Context Window:
Max Input: 200K
Max Output: 8K
Capabilities: vision, function_calling, parallel_function_calling
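How might fuzzy matching work? A hypothetical sketch (not the package's actual matcher): collapse punctuation to spaces on both sides, check substring containment, and prefer the shortest hit.

```typescript
// Hypothetical fuzzy matcher: normalize both the query and each
// registry key, then look for the query as a substring.
function normalize(s: string): string {
  return s.toLowerCase().replace(/[^a-z0-9]+/g, " ").trim();
}

function fuzzyFind(query: string, modelIds: string[]): string | undefined {
  const q = normalize(query); // "sonnet 4.5" -> "sonnet 4 5"
  const hits = modelIds.filter((id) => normalize(id).includes(q));
  // Prefer the shortest hit so "sonnet 4 5" resolves to
  // "claude-sonnet-4-5" rather than a longer dated variant.
  return hits.sort((a, b) => a.length - b.length)[0];
}

fuzzyFind("sonnet 4.5", ["claude-sonnet-4-5", "claude-sonnet-4-5-20250929"]);
// -> "claude-sonnet-4-5"
```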
### calculate_estimate
Estimate cost for a given number of input and output tokens.
> "How much will 1000 input + 500 output tokens cost on Claude Sonnet 4.5?"
Cost Estimate for claude-sonnet-4-5
Input: 1K tokens × $3.00/1M = $0.003000
Output: 500 tokens × $15.00/1M = $0.007500
─────────────────────────────
Total: $0.0105
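The arithmetic is simple: tokens times the per-token price, summed across input and output. A sketch, assuming LiteLLM-style per-token prices:

```typescript
// Cost = tokens × per-token price. LiteLLM stores per-token USD prices,
// so $3.00 per 1M input tokens is stored as 0.000003.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  inputCostPerToken: number,
  outputCostPerToken: number,
): number {
  return inputTokens * inputCostPerToken + outputTokens * outputCostPerToken;
}

estimateCost(1_000, 500, 3 / 1_000_000, 15 / 1_000_000); // 0.0105
```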
### compare_models
Find the most cost-effective models matching your requirements.
> "What are the cheapest OpenAI chat models?"
Top 2 most cost-effective models (provider: openai) (mode: chat):
1. gpt-4o-mini
Provider: openai | Mode: chat
Input: $0.15/1M | Output: $0.60/1M
Context: 128K in / 16K out
2. gpt-4o
Provider: openai | Mode: chat
Input: $5.00/1M | Output: $15.00/1M
Context: 128K in / 16K out
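Ranking could look like the sketch below; the blended input+output sort key is an assumption for illustration, not necessarily the weighting tokencost-dev uses:

```typescript
interface PricedModel {
  id: string;
  provider: string;
  mode: string;
  inputPerM: number;  // USD per 1M input tokens
  outputPerM: number; // USD per 1M output tokens
}

// Filter by provider/mode, then rank by a blended per-1M price.
function cheapest(
  models: PricedModel[],
  provider?: string,
  mode?: string,
): PricedModel[] {
  return models
    .filter(
      (m) =>
        (!provider || m.provider === provider) && (!mode || m.mode === mode),
    )
    .sort(
      (a, b) => a.inputPerM + a.outputPerM - (b.inputPerM + b.outputPerM),
    );
}
```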
### refresh_prices
Force an immediate re-fetch of pricing data from the LiteLLM registry (the cache otherwise refreshes automatically every 24 hours).
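A time-based cache like this is often just an mtime check; a minimal Node sketch, assuming a local JSON cache file (the path, TTL constant, and helper name are illustrative, not the package's internals):

```typescript
import { existsSync, statSync } from "node:fs";

const TTL_MS = 24 * 60 * 60 * 1000; // 24 hours
const CACHE_PATH = "/tmp/tokencost-prices.json"; // illustrative path

// Re-fetch when the cache is missing or older than the TTL;
// a forced refresh simply bypasses this check.
function cacheIsFresh(force = false): boolean {
  if (force || !existsSync(CACHE_PATH)) return false;
  const ageMs = Date.now() - statSync(CACHE_PATH).mtimeMs;
  return ageMs < TTL_MS;
}
```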
## Docs
Full documentation at [tokencost.dev](https://tokencost.dev).
## License
MIT