# ⚡ invinoveritas

**Lightning-paid reasoning and decision intelligence for autonomous agents.**

Pay-per-insight API using the L402 protocol (Bitcoin Lightning). No subscriptions. No accounts. No KYC.

Built primarily for autonomous AI agents that need high-quality reasoning and structured decisions without increasing local compute costs.

Live API: https://invinoveritas.onrender.com
## Why Agents Use invinoveritas

Autonomous agents call invinoveritas when they need:
- Strategic reasoning deeper than local models
- Structured, machine-readable decisions with confidence scores
- Deterministic cost per request (instead of unpredictable token usage)
- Cryptographically verifiable payments via Lightning
Agents pay only when confidence matters — with sats, atomically, without intermediaries.
## Quick Start for Agents

### Option 1 — MCP Server (Recommended)

Works seamlessly with Claude Desktop, Cursor, and any MCP-compatible agent.

```shell
pip install mcp requests lndgrpc pyln-client
python mcp_server.py
```

Then add it to your agent configuration.
### Option 2 — CLI Client

```shell
python agent_client.py --endpoint reason --question "What are the biggest risks for Bitcoin in 2026?"

python agent_client.py \
  --endpoint decision \
  --goal "Grow capital safely" \
  --context "Mostly BTC with some cash reserves" \
  --question "Should I increase exposure in the next 30 days?"
```
### Option 3 — Direct HTTP

```shell
# 1. Check price
curl https://invinoveritas.onrender.com/price/reason

# 2. Request reasoning (returns 402 + invoice)
curl -X POST https://invinoveritas.onrender.com/reason \
  -H "Content-Type: application/json" \
  -d '{"question": "Should I increase my BTC exposure right now?"}'

# 3. After paying the invoice, retry with credentials
curl -X POST https://invinoveritas.onrender.com/reason \
  -H "Content-Type: application/json" \
  -H "Authorization: L402 <payment_hash>:<preimage>" \
  -d '{"question": "Should I increase my BTC exposure right now?"}'
```
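The same exchange can be driven from Python with the `requests` library (already listed in the `pip install` line above). This is a minimal sketch: the header format mirrors step 3, and building it requires no live request.

```python
import requests  # installed in the Quick Start above

BASE = "https://invinoveritas.onrender.com"

def l402_header(payment_hash: str, preimage: str) -> str:
    """Build the Authorization header shown in step 3."""
    return f"L402 {payment_hash}:{preimage}"

def request_reasoning(question, credentials=None):
    """POST to /reason.

    Without credentials, the server should answer HTTP 402 with a bolt11
    invoice; with a (payment_hash, preimage) pair, it should return the
    reasoning response.
    """
    headers = {"Content-Type": "application/json"}
    if credentials is not None:
        headers["Authorization"] = l402_header(*credentials)
    return requests.post(f"{BASE}/reason", json={"question": question}, headers=headers)
```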
## Core Endpoints

| Endpoint | Purpose | Output Type | Typical Cost |
|---|---|---|---|
| `POST /reason` | Strategic reasoning | Natural language | ~500–700 sats |
| `POST /decision` | Structured decision intelligence | Clean JSON | ~1000–1200 sats |
## Response Examples

### `/reason`

```json
{
  "status": "success",
  "type": "premium_reasoning",
  "answer": "..."
}
```

### `/decision`

```json
{
  "status": "success",
  "type": "decision_intelligence",
  "result": {
    "decision": "Increase exposure slightly",
    "confidence": 0.78,
    "reasoning": "Market structure improving while risk remains moderate.",
    "risk_level": "medium"
  }
}
```
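A `/decision` response can be turned directly into a go/no-go signal. The sketch below uses the field names from the example response; the 0.75 threshold and the "skip high risk" rule are illustrative choices, not part of the API.

```python
def should_act(response: dict, min_confidence: float = 0.75) -> bool:
    """Gate an action on a /decision response.

    Field names (status, result.confidence, result.risk_level) follow the
    example response; the threshold and risk rule are illustrative.
    """
    if response.get("status") != "success":
        return False
    result = response.get("result", {})
    confident = result.get("confidence", 0.0) >= min_confidence
    return confident and result.get("risk_level") != "high"

example = {
    "status": "success",
    "type": "decision_intelligence",
    "result": {
        "decision": "Increase exposure slightly",
        "confidence": 0.78,
        "reasoning": "Market structure improving while risk remains moderate.",
        "risk_level": "medium",
    },
}
print(should_act(example))  # → True: confidence 0.78 >= 0.75, risk is "medium"
```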
## Payment Flow (L402)

1. POST to `/reason` or `/decision` → receive HTTP 402 with a bolt11 invoice
2. Pay the invoice with any Lightning wallet or node
3. Retry the same request with `Authorization: L402 <payment_hash>:<preimage>`
4. Receive the AI response

The MCP server and `agent_client.py` handle the full flow automatically.
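The four steps can be sketched as one small helper. To keep the sketch self-contained, the HTTP call and the Lightning payment are injected as callables, and the `"invoice"` JSON field name is an assumption (the exact 402 body format is not documented here).

```python
def call_l402(post, pay_invoice):
    """Run the L402 flow: POST, pay on 402, retry with credentials.

    post(headers) performs the request and returns a response object with
    .status_code and .json(); pay_invoice(bolt11) pays the invoice and
    returns (payment_hash, preimage). The "invoice" field name is assumed.
    """
    first = post(None)
    if first.status_code != 402:
        return first  # already authorized, or the endpoint is free
    invoice = first.json()["invoice"]  # assumed field carrying the bolt11 invoice
    payment_hash, preimage = pay_invoice(invoice)
    return post({"Authorization": f"L402 {payment_hash}:{preimage}"})
```

In practice `post` would wrap `requests.post` against `/reason` or `/decision`, and `pay_invoice` would call out to an LND node or wallet.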
## For Autonomous Agents

The recommended integration path is the included `mcp_server.py`. It exposes `reason` and `decision` as native MCP tools: agents simply call the tool, and payment is handled transparently in the background.

Also included:

- `agent_client.py` — a simple CLI for scripts
- A Freqtrade strategy example using `/decision` as a confidence gate
## Discovery Endpoints (Always Free)

| Endpoint | Purpose |
|---|---|
| `GET /health` | Service status and metadata |
| `GET /price/{endpoint}` | Current price in sats |
| `GET /tool` | Machine-readable tool definition |
| `GET /.well-known/ai-plugin.json` | Standard agent discovery |
| `GET /docs` | Interactive API docs |
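An agent can construct all of the free discovery URLs up front from the base URL. The paths below come from the table above; the dictionary keys are just illustrative names.

```python
BASE = "https://invinoveritas.onrender.com"

def discovery_urls(base: str = BASE) -> dict:
    """Build the free discovery URLs listed in the table above."""
    paths = {
        "health": "/health",
        "price_reason": "/price/reason",
        "price_decision": "/price/decision",
        "tool": "/tool",
        "plugin_manifest": "/.well-known/ai-plugin.json",
        "docs": "/docs",
    }
    return {name: base + path for name, path in paths.items()}
```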
## Project Structure

```
invinoveritas/
├── app.py                 # Main API (deployed on Render)
├── ai.py                  # Reasoning and decision logic
├── config.py
├── node_bridge.py         # Talks to Lightning bridge
├── bridge.py              # LND bridge (runs on VPS)
├── mcp_server.py          # MCP server for Claude, Cursor, etc.
├── agent_client.py        # CLI client
├── examples/
│   └── invinoveritas_strategy.py
├── index.html
├── requirements.txt
└── README.md
```
## Deployment

- Main API — Render
- Lightning Bridge — VPS running LND

Environment Variables (Render):

- `OPENAI_API_KEY`
- `NODE_URL` (your Lightning bridge URL)
## Philosophy

Most AI agents are forced into monthly subscriptions or unpredictable token costs. invinoveritas offers a better model:
Atomic intelligence purchases using Bitcoin Lightning.
Agents pay only when the decision truly matters — with sats, cryptographically, and without intermediaries.
Built for the Bitcoin × AI future. ⚡
## Quick Links

- GitHub: https://github.com/babyblueviper1/invinoveritas
- Live API: https://invinoveritas.onrender.com
- MCP Server: `mcp_server.py`
- Health: `/health`