# Sparky Tools API – Crypto + Web Fetch

Free HTTP API for AI agents – real-time cryptocurrency prices plus intelligent web scraping. No API keys, no signup, and rate limits generous enough for personal use. Built for OpenClaw and other AI agent platforms.
Keywords: ai agent api, crypto price api, web scraping api, openclaw tools, agent web fetch, mcp alternative, free crypto api, structured web extraction, ai agent http tools, fastapi agent server
## Table of Contents
- Features
- Quick Start
- API Reference
- OpenClaw Agent Integration
- Use Cases
- Deployment
- Architecture
- Configuration
- License
## Features
| Feature | Endpoint | Cache | Rate Limit |
|---|---|---|---|
| Crypto Prices | `/prices` | 45 sec | 15/min |
| Web Scraping | `/fetch` | 10 min | 10/min |
| Health Check | `/health` | none | none |
| Agent Discovery | `/.well-known/agent.json` | none | none |
### Crypto Prices (`/prices`)

- ✅ 1000+ cryptocurrencies via CoinGecko
- ✅ Real-time USD prices + 24h change
- ✅ No CoinGecko API key required
- ✅ Smart caching reduces upstream API calls
### Web Fetch (`/fetch`)

- ✅ Intelligent content extraction – no HTML garbage
- ✅ Extracts: title, meta, article text, tables, links, images
- ✅ Uses trafilatura + BeautifulSoup (battle-tested stack)
- ✅ Perfect for AI agents that need clean web content
- ✅ Caches results per URL (10 minutes)
### For AI Agents

- ✅ OpenClaw compatible – HTTP REST, no MCP complexity
- ✅ Structured JSON responses
- ✅ `.well-known/agent.json` for auto-discovery
- ✅ Copy-paste ready integration examples
## Quick Start
```bash
# 1. Clone
git clone https://github.com/chuddyrudd/sparky-crypto-prices.git
cd sparky-crypto-prices

# 2. Setup
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# 3. Run
python3 app.py
# Server starts on http://localhost:8000

# 4. Test
curl http://localhost:8000/health
```
## API Reference
### `GET /prices`

Real-time cryptocurrency prices.

```bash
curl "http://localhost:8000/prices?coins=bitcoin,ethereum,solana"
```
**Parameters:**

| Name | Type | Default | Description |
|---|---|---|---|
| `coins` | string | `bitcoin,ethereum,solana,cardano` | Comma-separated CoinGecko IDs |
**Response:**

```json
{
  "timestamp": "2026-02-26T11:45:00",
  "prices": {
    "bitcoin": { "usd": 67245.00, "usd_24h_change": 2.5 },
    "ethereum": { "usd": 3521.40, "usd_24h_change": -1.2 },
    "solana": { "usd": 145.20, "usd_24h_change": 5.1 }
  },
  "source": "CoinGecko cached via Sparky",
  "note": "Upgrade to paid v2 coming"
}
```
**Supported Coins:** Any CoinGecko ID. Common: `bitcoin`, `ethereum`, `solana`, `cardano`, `polkadot`, `dogecoin`, `chainlink`, `avalanche`
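An agent consuming this response in Python can flatten it for display with a tiny helper. This is an illustrative sketch: the `summarize_prices` name and the hard-coded `sample` payload are mine, mirroring the documented response shape rather than a live call.

```python
def summarize_prices(payload):
    """Render a /prices response as one line per coin, e.g. 'bitcoin: $67245.00 (+2.5%)'."""
    lines = []
    for coin, data in sorted(payload["prices"].items()):
        change = data["usd_24h_change"]
        sign = "+" if change >= 0 else ""  # negative numbers carry their own minus sign
        lines.append(f"{coin}: ${data['usd']:.2f} ({sign}{change}%)")
    return lines

# Sample payload copied from the documented response shape
sample = {
    "timestamp": "2026-02-26T11:45:00",
    "prices": {
        "bitcoin": {"usd": 67245.00, "usd_24h_change": 2.5},
        "ethereum": {"usd": 3521.40, "usd_24h_change": -1.2},
    },
}

print(summarize_prices(sample))
# ['bitcoin: $67245.00 (+2.5%)', 'ethereum: $3521.40 (-1.2%)']
```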
### `GET /fetch`

Intelligent web content extraction.

```bash
curl "http://localhost:8000/fetch?url=example.com"
```
**Parameters:**

| Name | Type | Required | Description |
|---|---|---|---|
| `url` | string | Yes | Any URL (`https://` auto-added if missing) |
**Response:**

```json
{
  "timestamp": "2026-02-26T11:45:00",
  "url": "https://example.com",
  "title": "Example Domain",
  "meta_description": "This domain is for use in illustrative examples...",
  "clean_content": "Clean article text without ads or HTML tags...",
  "tables": [],
  "links": ["https://example.com/page1", "https://example.com/page2"],
  "images": ["https://example.com/image.jpg"],
  "source": "Sparky Web Fetch (requests + trafilatura)",
  "note": "Clean, no ads, no HTML garbage. Use ?url=example.com"
}
```
Use Cases:
- Research: Extract article content for AI analysis
- Data mining: Scrape structured data from websites
- Monitoring: Track changes on web pages
- Integration: Feed clean content to LLMs
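When the target URL itself carries a query string, it should be percent-encoded before being passed as the `url` parameter, or its `&`-separated parts will be read as extra parameters of `/fetch`. A small sketch (the `build_fetch_url` helper and `BASE` constant are illustrative, not part of the API):

```python
from urllib.parse import quote

BASE = "http://localhost:8000"  # assumes the local deployment from Quick Start

def build_fetch_url(target_url):
    """Build a /fetch request URL, percent-encoding the target so its own
    query string does not clash with the API's url parameter."""
    return f"{BASE}/fetch?url={quote(target_url, safe='')}"

print(build_fetch_url("https://example.com/page?id=1&lang=en"))
# http://localhost:8000/fetch?url=https%3A%2F%2Fexample.com%2Fpage%3Fid%3D1%26lang%3Den
```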
### `GET /health`

Health check endpoint.

```bash
curl http://localhost:8000/health
```
**Response:**

```json
{
  "status": "ok",
  "version": "v1-crypto+webfetch",
  "uptime": 1772120121.51
}
```
### `GET /.well-known/agent.json`

Agent discovery card for AI platforms.

```bash
curl http://localhost:8000/.well-known/agent.json
```
**Response:**

```json
{
  "name": "Sparky Tools Oracle",
  "description": "Crypto prices + Web Fetch (clean structured URL to JSON). Free v1.",
  "url": "https://your-tunnel-url/prices or /fetch?url=",
  "capabilities": ["crypto-prices", "web-fetch", "structured-scrape"],
  "protocol": "http"
}
```
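An agent platform that fetches this card can check the `capabilities` list before routing a request. A minimal sketch (the `supports` helper is hypothetical; the `card` dict mirrors the response above):

```python
def supports(card, capability):
    """Return True if the agent card advertises the given capability."""
    return capability in card.get("capabilities", [])

card = {
    "name": "Sparky Tools Oracle",
    "capabilities": ["crypto-prices", "web-fetch", "structured-scrape"],
    "protocol": "http",
}

print(supports(card, "crypto-prices"))  # True
print(supports(card, "email"))          # False
```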
## OpenClaw Agent Integration
This API is built for OpenClaw and other AI agent platforms.
### Basic Usage

```python
# Get crypto prices
result = web_fetch("http://localhost:8000/prices?coins=bitcoin,ethereum")
# Returns: {"prices": {"bitcoin": {"usd": 67245, ...}}}

# Scrape web content
result = web_fetch("http://localhost:8000/fetch?url=news.ycombinator.com")
# Returns: {"title": "Hacker News", "clean_content": "...", "links": [...]}
```
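`web_fetch` above is the OpenClaw tool. To try the same calls outside OpenClaw, a stand-in can be written with the standard library; this sketch assumes the endpoint returns a JSON body, as `/prices` and `/fetch` do:

```python
import json
from urllib.request import urlopen

def web_fetch(url):
    """Stand-in for OpenClaw's web_fetch tool: GET a URL and decode the JSON body."""
    with urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

With the server from Quick Start running, `web_fetch("http://localhost:8000/health")` returns the `/health` JSON documented in the API Reference.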
### Advanced Agent Workflows

```python
# 1. Research workflow
def research_topic(topic):
    # Search for topic
    search_url = f"https://en.wikipedia.org/wiki/{topic}"
    content = web_fetch(f"http://localhost:8000/fetch?url={search_url}")
    return content["clean_content"]

# 2. Crypto tracking workflow
def track_crypto_portfolio(coins):
    prices = web_fetch(f"http://localhost:8000/prices?coins={coins}")
    return prices["prices"]

# 3. Combined workflow
def analyze_crypto_news(coin):
    # Get price
    price_data = web_fetch(f"http://localhost:8000/prices?coins={coin}")
    # Get news context
    news = web_fetch(f"http://localhost:8000/fetch?url=coinmarketcap.com/currencies/{coin}/news")
    return {"price": price_data, "context": news["clean_content"]}
```
### Why HTTP REST > MCP for Agents

| | HTTP REST (This API) | MCP |
|---|---|---|
| Setup | One URL, instant | Complex config, stdio |
| Compatibility | Works everywhere | Only MCP-aware clients |
| Debugging | curl, browser | Harder to troubleshoot |
| Agent Access | `web_fetch()` tool | Special client required |
| Discovery | `.well-known/agent.json` | Manual configuration |
## Use Cases
### For Crypto Traders

```bash
# Track Bitcoin price
curl "localhost:8000/prices?coins=bitcoin"

# Track portfolio
curl "localhost:8000/prices?coins=bitcoin,ethereum,solana,cardano,polkadot"
```
### For Researchers

```bash
# Extract article content
curl "localhost:8000/fetch?url=medium.com/article-about-ai"

# Scrape documentation
curl "localhost:8000/fetch?url=docs.python.org/3/tutorial"
```
### For AI Agents

Your OpenClaw agent can now:

1. Check crypto prices
2. Scrape web content
3. Build knowledge bases
4. Monitor websites for changes
## Deployment
### Local Development

```bash
python3 app.py
```
### Public URL (Cloudflare Tunnel)

```bash
cloudflared tunnel --url http://localhost:8000
```
### Production (PM2)

```bash
npm install -g pm2
pm2 start app.py --name sparky-api --interpreter python3
pm2 save
pm2 startup
```
### Production (systemd)

```bash
# Copy service file (create your own)
sudo cp sparky-api.service /etc/systemd/system/
sudo systemctl enable sparky-api
sudo systemctl start sparky-api
```
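No unit file ships with the repo, so here is a starting-point sketch for `sparky-api.service`. The paths and `User` are placeholders for wherever you cloned the repo and created the venv:

```ini
[Unit]
Description=Sparky Tools API
After=network.target

[Service]
WorkingDirectory=/opt/sparky-crypto-prices
ExecStart=/opt/sparky-crypto-prices/venv/bin/python app.py
Restart=on-failure
User=www-data

[Install]
WantedBy=multi-user.target
```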
## Architecture

```
┌──────────────┐  HTTP GET     ┌──────────────────┐   HTTPS    ┌──────────────┐
│    Client    │──────────────►│  FastAPI Server  │───────────►│  CoinGecko   │
│  (Any HTTP)  │ JSON Response │   (this repo)    │  REST API  │     API      │
└──────────────┘               └──────────────────┘            └──────────────┘
                                        │
                                        ▼
                               ┌─────────────────┐
                               │    Web Pages    │
                               │    (any URL)    │
                               └─────────────────┘
```
Stack:
- Framework: FastAPI (high-performance Python)
- Web Extraction: trafilatura + BeautifulSoup + lxml
- Rate Limiting: slowapi
- Caching: cachetools (TTLCache)
- Server: uvicorn (ASGI)
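The server caches with `cachetools.TTLCache`; the core idea can be sketched in a few lines. This `TinyTTLCache` is an illustration only, not the real implementation:

```python
import time

class TinyTTLCache:
    """Stripped-down illustration of TTL caching: entries expire ttl seconds after insertion."""

    def __init__(self, ttl, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock    # injectable clock makes the cache testable
        self._store = {}      # key -> (expires_at, value)

    def put(self, key, value):
        self._store[key] = (self.clock() + self.ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if self.clock() >= expires_at:
            del self._store[key]  # stale: evict and report a miss
            return None
        return value

# /prices responses are cached for 45 seconds (see the Features table)
cache = TinyTTLCache(ttl=45)
cache.put("bitcoin,ethereum", {"bitcoin": {"usd": 67245.0}})
print(cache.get("bitcoin,ethereum"))
```

Repeated requests for the same `coins` list inside the TTL window are served from memory, which is how the server stays under CoinGecko's free-tier limits.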
## Configuration

Create a `.env` file:

```bash
PORT=8000
TELEGRAM_BOT_TOKEN=your_token_here
TELEGRAM_BOT_CHAT_ID=your_chat_id_here
```
Optional: Telegram notifications on first external hit.
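The app's own loader isn't shown here; if you want to read such a file without extra dependencies, a minimal stdlib reader looks like this. `load_env` is a sketch; `.env` files with quoting or multiline values need a real library such as python-dotenv:

```python
import os

def load_env(path=".env"):
    """Minimal .env reader: KEY=VALUE lines, '#' comments; existing
    environment variables are not overwritten."""
    values = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
            os.environ.setdefault(key.strip(), value.strip())
    return values
```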
## License

MIT © 2026 chuddyrudd
Built for AI Agents. Powered by OpenClaw. v1 is free; a paid v2 is planned.
Search: ai agent api, crypto api free, web scraping api, openclaw agent tools, fastapi agent server, mcp alternative http