wellread

Shared research cache for AI agents. Caches web research across sessions and users - hit means instant answer from verified sources, miss means your research saves the next dev's tokens. Semantic search with freshness tracking, gap detection, and real-time token measurement via JSONL. Free, open source.

wellread — Another dev already searched that.

The problem

  • ❌ Your agent researches every technical question from scratch — 10-20 turns per query
  • ❌ When it doesn't search, it hallucinates — outdated APIs, wrong examples, broken code
  • ❌ Each turn re-sends your entire conversation history, and the cost compounds
  • ❌ Thousands of devs burning tokens on the same questions, every day

The fix

Before your agent searches the web, wellread checks what other devs already found.

  • Hit → instant answer from verified sources. Zero web searches. One turn.
  • Partial → starts from what exists, only researches the gaps.
  • Miss → normal research, then saves it for the next person.
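The three outcomes above can be sketched as a cache-first decision over semantic-search matches. This is an illustration only: the `CacheEntry` shape and the similarity thresholds are assumptions, not wellread's actual internals.

```typescript
// Hypothetical result of a semantic search over the shared cache.
type CacheEntry = { topic: string; answer: string; similarity: number };

type Outcome =
  | { kind: "hit"; answer: string }        // verified answer, zero web searches, one turn
  | { kind: "partial"; startFrom: string } // reuse what exists, research only the gaps
  | { kind: "miss" };                      // full research, then save it for the next dev

function decide(matches: CacheEntry[]): Outcome {
  const best = [...matches].sort((a, b) => b.similarity - a.similarity)[0];
  if (!best) return { kind: "miss" };
  if (best.similarity >= 0.85) return { kind: "hit", answer: best.answer };
  if (best.similarity >= 0.6) return { kind: "partial", startFrom: best.answer };
  return { kind: "miss" };
}
```

The partial branch is what keeps the cache compounding: even an imperfect match shrinks the remaining research to just the gaps.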

The compounding effect

| | Without wellread | With wellread |
|---|---|---|
| Turn 1 (fresh session) | 200K tokens · 10 turns · 67s | 647 tokens · 1 turn · 28s |
| Turn 30 (~40K context) | 1.2M tokens | 647 tokens |
| Turn 100 (~150K context) | 3.5M tokens | 647 tokens |
| Turn 250 (~480K context) | 11M tokens | 647 tokens |

The deeper your session, the more expensive research gets, and the more wellread saves.
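The compounding is simple arithmetic: every agentic research turn re-sends the full conversation, and the conversation grows as results come back. A rough cost model (an illustration, not wellread's JSONL measurement — the per-turn growth figure is an assumption):

```typescript
// Cost of a multi-turn web research at a given conversation depth.
// Every turn re-sends the whole history; the history grows each turn.
function researchCost(contextTokens: number, turns: number, perTurnTokens: number): number {
  let total = 0;
  let context = contextTokens;
  for (let i = 0; i < turns; i++) {
    total += context + perTurnTokens; // full history re-sent, plus this turn's additions
    context += perTurnTokens;         // history grows before the next turn
  }
  return total;
}
```

At 40K context, 10 research turns of ~5K tokens each cost `researchCost(40_000, 10, 5_000)` = 675,000 tokens — while a cache hit sends the history once and gets a ~647-token verified answer back in a single turn.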

Install

```
npx wellread
```

Restart your editor. That's it.

Update: `npx wellread@latest` · Uninstall: `npx wellread uninstall`

Singleplayer

Your own research comes back to you. No repeat searches, no hallucinations from stale training data — real sources, verified.

Multiplayer

27 devs already used that Auth.js research before you got here. One person researched, everyone benefits.

Freshness

Each entry knows how fast its topic changes:

| Type | Fresh | Re-check | Re-research |
|---|---|---|---|
| Stable (React, PostgreSQL) | 6 months | 1 year | after |
| Evolving (Next.js, Bun) | 30 days | 90 days | after |
| Volatile (betas, pre-release) | 7 days | 30 days | after |

When an agent re-verifies, the clock resets for everyone.
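Read as a staleness check, the table boils down to two thresholds per volatility class. The thresholds mirror the table above; the function shape and names are assumptions for illustration:

```typescript
type Volatility = "stable" | "evolving" | "volatile";

// Days from the table: fresh until `fresh`, quick re-check until `recheck`,
// full re-research after that.
const WINDOWS: Record<Volatility, { fresh: number; recheck: number }> = {
  stable:   { fresh: 180, recheck: 365 }, // React, PostgreSQL
  evolving: { fresh: 30,  recheck: 90 },  // Next.js, Bun
  volatile: { fresh: 7,   recheck: 30 },  // betas, pre-release
};

function status(type: Volatility, ageDays: number): "fresh" | "re-check" | "re-research" {
  const w = WINDOWS[type];
  if (ageDays <= w.fresh) return "fresh";
  if (ageDays <= w.recheck) return "re-check";
  return "re-research"; // a successful re-verify resets ageDays to 0 for everyone
}
```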

Privacy

Only generalized research summaries are shared. No code, no file paths, no credentials, no project names. Your agent strips everything private before saving.
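The kind of stripping described above could look like the sketch below. This is illustrative only — wellread's actual redaction is done by the agent before saving, not by these exact patterns, which are assumptions:

```typescript
// Replace obviously private material in a research summary with placeholders.
function sanitize(summary: string): string {
  return summary
    .replace(/(?:[A-Za-z]:)?(?:\/|\\)[\w.\-\/\\]+/g, "<path>")                  // file paths
    .replace(/\b(?:sk|ghp|xox[bap])[-_][A-Za-z0-9\-_]{10,}\b/g, "<secret>")     // common API-key shapes
    .replace(/[\w.+-]+@[\w.-]+\.\w+/g, "<email>");                              // email addresses
}
```

Real redaction needs far more than three regexes (project names, internal hostnames, snippets of proprietary code), which is why the stripping is delegated to the agent rather than a fixed pattern list.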

Supported tools

Works with any MCP client. Best experience with Claude Code. Also supports Cursor, Windsurf, Gemini CLI, VS Code, OpenCode.

Stats

Ask your agent "show me my wellread stats" to see your search savings, top contributions, and network impact.

License

AGPL-3.0
