# ARSR MCP Server

Adaptive Retrieval-Augmented Self-Refinement — a closed-loop MCP server that lets LLMs iteratively verify and correct their own claims using uncertainty-guided retrieval.
## What it does

Unlike one-shot RAG (retrieve → generate), ARSR runs a refinement loop:

```
Generate draft → Decompose claims → Score uncertainty
      ↑                                      ↓
Decide stop ← Revise with evidence ← Retrieve for low-confidence claims
```

The key insight: retrieval is guided by uncertainty. Only claims the model is unsure about trigger evidence fetching, and the queries are adversarial — designed to disprove the claim, not just confirm it.
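To make the strategy concrete, here is a minimal sketch of strategy-dependent query building. The function name and query phrasing are illustrative assumptions, not the server's actual implementation:

```typescript
// Hypothetical sketch: turn a claim into a search query per retrieval strategy.
type Strategy = "adversarial" | "confirmatory" | "balanced";

function buildQuery(claim: string, strategy: Strategy): string {
  switch (strategy) {
    case "adversarial":
      // Hunt for counter-evidence rather than confirmation.
      return `evidence against the claim that ${claim}`;
    case "confirmatory":
      return `evidence supporting the claim that ${claim}`;
    case "balanced":
      return `is it true that ${claim}? sources for and against`;
  }
}

console.log(buildQuery("Tesla was founded in 2003", "adversarial"));
// → "evidence against the claim that Tesla was founded in 2003"
```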
## Architecture

The server exposes six MCP tools. The outer LLM (Claude, GPT, etc.) orchestrates the loop by calling them in sequence:
| # | Tool | Purpose |
|---|------|---------|
| 1 | `arsr_draft_response` | Generate the initial candidate answer (returns an `is_refusal` flag) |
| 2 | `arsr_decompose_claims` | Split the draft into atomic, verifiable claims |
| 3 | `arsr_score_uncertainty` | Estimate per-claim confidence via semantic entropy (see the sketch below) |
| 4 | `arsr_retrieve_evidence` | Web search for low-confidence claims |
| 5 | `arsr_revise_response` | Rewrite the draft with the retrieved evidence |
| 6 | `arsr_should_continue` | Decide: iterate or finalize |
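For intuition on tool 3: semantic entropy is estimated by sampling the inner model several times (`ARSR_ENTROPY_SAMPLES` rephrasings), grouping semantically equivalent answers, and taking the entropy of the group distribution; unanimous samples mean low entropy and high confidence. A minimal sketch, where exact string matching stands in for the inner LLM's semantic-equivalence judgment:

```typescript
// Sketch: confidence from the entropy of clustered answer samples.
// Real clustering uses the inner LLM to judge semantic equivalence;
// exact string matching stands in for it here.
function semanticEntropy(answers: string[]): number {
  const counts = new Map<string, number>();
  for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);
  let entropy = 0;
  for (const n of counts.values()) {
    const p = n / answers.length;
    entropy -= p * Math.log2(p);
  }
  return entropy; // 0 bits = all samples agree; higher = more disagreement
}

const samples = ["2003", "2003", "2004"];              // 3 entropy samples
const h = semanticEntropy(samples);                    // ≈ 0.918 bits
const confidence = 1 - h / Math.log2(samples.length);  // ≈ 0.42 → below 0.85, triggers retrieval
```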
**Inner LLM:** Tools 1–5 use Claude Haiku internally for their LLM work (query generation, claim extraction, evidence evaluation). This keeps costs low while the outer model handles orchestration.
**Refusal detection:** `arsr_draft_response` returns a structured `is_refusal` flag (classified by the inner LLM) indicating whether the draft is a non-answer. When `is_refusal` is true, downstream tools (decompose, revise) pivot to extracting claims from the original query and building an answer from retrieved evidence instead of trying to refine a refusal.
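For illustration, a refusal draft could come back shaped like this (the field names follow the usage example below; the content is made up):

```json
{
  "draft": "I don't have reliable information about that specific event.",
  "is_refusal": true
}
```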
**Web search:** `arsr_retrieve_evidence` uses the Anthropic API's built-in web search tool — no external search API keys needed.
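For reference, a sketch of what such a web-search call looks like with the Anthropic SDK; the model, prompt, and `max_uses` values here are illustrative, not the server's exact parameters:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Server-side web search: Anthropic executes the searches and returns cited
// results, so no Google/Bing/Brave API key is involved.
const response = await anthropic.messages.create({
  model: "claude-haiku-4-5-20251001",
  max_tokens: 1024,
  tools: [{ type: "web_search_20250305", name: "web_search", max_uses: 3 }],
  messages: [
    { role: "user", content: "Find evidence against the claim: Tesla was founded in 2003." },
  ],
});
```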
## Setup

### Prerequisites

- Node.js 18+
- An Anthropic API key

### Install & Build

```bash
cd arsr-mcp-server
npm install
npm run build
```
### Environment

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```
### Run

stdio mode (for Claude Desktop, Cursor, etc.):

```bash
npm start
```

HTTP mode (for remote access):

```bash
TRANSPORT=http PORT=3001 npm start
```
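To sanity-check the HTTP transport, you can send an MCP `initialize` request by hand; the `/mcp` path is an assumption here (check the server's startup output for the actual endpoint):

```bash
curl -s http://localhost:3001/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.0"}}}'
```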
## Claude Desktop Configuration

Add to your `claude_desktop_config.json`:

**npm:**

```json
{
  "mcpServers": {
    "arsr": {
      "command": "npx",
      "args": ["@jayarrowz/mcp-arsr"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "ARSR_MAX_ITERATIONS": "3",
        "ARSR_ENTROPY_SAMPLES": "3",
        "ARSR_RETRIEVAL_STRATEGY": "adversarial",
        "ARSR_INNER_MODEL": "claude-haiku-4-5-20251001"
      }
    }
  }
}
```
**Local build:**

```json
{
  "mcpServers": {
    "arsr": {
      "command": "node",
      "args": ["/path/to/arsr-mcp-server/dist/src/index.js"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "ARSR_MAX_ITERATIONS": "3",
        "ARSR_ENTROPY_SAMPLES": "3",
        "ARSR_RETRIEVAL_STRATEGY": "adversarial",
        "ARSR_INNER_MODEL": "claude-haiku-4-5-20251001"
      }
    }
  }
}
```
## How the outer LLM uses it

The orchestrating LLM calls the tools in sequence:

```js
// 1. Draft — draft.is_refusal indicates if the inner LLM refused to answer
draft = arsr_draft_response({ query: "When was Tesla founded?" })

// 2. Decompose the draft into atomic claims
claims = arsr_decompose_claims({ draft: draft.draft, original_query: "When was Tesla founded?", is_refusal: draft.is_refusal })

// 3. Score each claim's confidence
scored = arsr_score_uncertainty({ claims: claims.claims })

// 4. Keep only the low-confidence claims
low = scored.scored.filter(c => c.confidence < 0.85)

// 5. Retrieve evidence for them
evidence = arsr_retrieve_evidence({ claims_to_check: low })

// 6. Revise the draft against the evidence
revised = arsr_revise_response({ draft: draft.draft, evidence: evidence.evidence, scored: scored.scored, original_query: "When was Tesla founded?", is_refusal: draft.is_refusal })

// 7. Decide (revised_scores = scores from re-running step 3 on the revised text)
decision = arsr_should_continue({ iteration: 1, scored: revised_scores })
//    → if "continue": go back to step 2 with the revised text
//    → if "stop": return revised.revised to the user
```
## Configuration

All settings can be overridden via environment variables, falling back to defaults if unset:

| Setting | Env var | Default | Description |
|---------|---------|---------|-------------|
| `max_iterations` | `ARSR_MAX_ITERATIONS` | `3` | Budget limit for refinement loops |
| `confidence_threshold` | `ARSR_CONFIDENCE_THRESHOLD` | `0.85` | Claims above this skip retrieval |
| `entropy_samples` | `ARSR_ENTROPY_SAMPLES` | `3` | Number of rephrasings for semantic entropy |
| `retrieval_strategy` | `ARSR_RETRIEVAL_STRATEGY` | `adversarial` | `adversarial`, `confirmatory`, or `balanced` |
| `inner_model` | `ARSR_INNER_MODEL` | `claude-haiku-4-5-20251001` | Model used for internal intelligence |
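For example, to try a stricter threshold and balanced retrieval for one run without touching any config file:

```bash
ARSR_CONFIDENCE_THRESHOLD=0.9 ARSR_RETRIEVAL_STRATEGY=balanced npm start
```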
## Cost estimate

Per refinement-loop iteration (assuming ~5 claims, 3 of them low-confidence):

- Inner LLM calls: ~6–10 Haiku calls ≈ $0.002–0.005
- Web searches: 6–9 queries ≈ included in the API
- Typical total for 2 iterations: < $0.02
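As a rough sanity check on the first line: 8 calls at ~500 tokens each is ~4,000 tokens, and at Haiku-class input rates near $1 per million tokens that works out to ~$0.004 per iteration, consistent with the range above. The per-call token counts and rate are assumptions; check current Anthropic pricing for exact numbers.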
## Images

Before:

<img width="956" height="977" alt="claude_596yXSQSOh" src="https://github.com/user-attachments/assets/95771a10-8a29-4900-b128-67af3cbc05bd" />

After:

<img width="856" height="866" alt="claude_UagHKfgqDz" src="https://github.com/user-attachments/assets/340e8011-4c2d-4e95-9c4d-43a55e87b7be" />
<img width="800" height="342" alt="claude_WZGa6xqUip" src="https://github.com/user-attachments/assets/dbc364c2-1925-427a-a979-cd1fade38f1d" />
<img width="777" height="578" alt="claude_KedQnUoSue" src="https://github.com/user-attachments/assets/0e57f578-a9c2-4325-9b6e-61d7a42f3ee8" />
## License

MIT