# query-sanitizer-mcp
A lightweight MCP middleware that sits between your prompts and external LLMs, automatically redacting sensitive data using a local model (Ollama / LM Studio) before anything leaves your machine.
```
[Your Prompt] → sanitize_query() → [Safe Prompt] → External LLM → [Response] → restore_response() → [You]
```
## Why
Every time you paste internal context into Claude, ChatGPT, or any cloud LLM, you risk leaking:
- Employee names & emails
- Internal project codenames
- Infrastructure details (IPs, hostnames, DB names)
- API keys & credentials
- Company names, deal sizes, legal references
This MCP server intercepts that text, runs it through a local DLP model, replaces sensitive tokens with typed placeholders ([ORG_NAME_1], [PII_NAME_1], etc.), and restores them in the response: you see natural text, and the cloud LLM never sees the real values.
## Tools

| Tool | Description |
|---|---|
| `sanitize_query(text)` | Redact sensitive data. Returns safe text plus a `san_id` for later restore. |
| `restore_response(text, san_id)` | Swap placeholders back to originals using the ledger. |
| `view_ledger(last_n)` | Show recent sanitization history. |
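For intuition, here is a toy, self-contained illustration of the placeholder round trip. The real `sanitize_query` / `restore_response` tools do this via the local model and the ledger; every name and value below is made up:

```python
# Toy illustration of the placeholder round trip (all values are invented).
prompt = "Ask John Smith from Acme Corp to restart db-prod-01 for project Phoenix."

# Roughly what sanitize_query() produces: typed placeholders plus a mapping.
mappings = {
    "[PII_NAME_1]": "John Smith",
    "[ORG_NAME_1]": "Acme Corp",
    "[INFRA_1]": "db-prod-01",
    "[PROJECT_NAME_1]": "Phoenix",
}
sanitized = prompt
for placeholder, original in mappings.items():
    sanitized = sanitized.replace(original, placeholder)
print(sanitized)
# Ask [PII_NAME_1] from [ORG_NAME_1] to restart [INFRA_1] for project [PROJECT_NAME_1].

# Roughly what restore_response() does with the ledger entry for that san_id.
llm_reply = "Sure, I'll draft a message to [PII_NAME_1] about [INFRA_1]."
restored = llm_reply
for placeholder, original in mappings.items():
    restored = restored.replace(placeholder, original)
print(restored)
```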
## Setup

Requirements: Python 3.10+, Ollama or LM Studio running locally.

```bash
git clone https://github.com/vidoluco/query-sanitizer-mcp
cd query-sanitizer-mcp
python3.12 -m venv .venv
.venv/bin/pip install fastmcp
```
### Start your local model

```bash
# Ollama
ollama pull llama3.2
ollama serve

# LM Studio — just load a model and start the local server on port 1234
```
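Before wiring the model into the MCP config, you can confirm the local OpenAI-compatible endpoint answers. This is just an optional stdlib check, not part of the repo; adjust the URL and model name to match your setup:

```python
# Optional sanity check: does the local model answer on the configured endpoint?
import json
import urllib.request

URL = "http://localhost:11434/v1/chat/completions"  # LM Studio: http://localhost:1234/v1/chat/completions
payload = {
    "model": "llama3.2",  # must match SANITIZER_MODEL_NAME
    "messages": [{"role": "user", "content": "Reply with OK"}],
}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=60) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```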
### Add to Claude Code

Merge into `~/.claude/settings.json`:

```json
{
  "mcpServers": {
    "query-sanitizer": {
      "command": "/path/to/query-sanitizer-mcp/.venv/bin/python",
      "args": ["/path/to/query-sanitizer-mcp/server.py"],
      "env": {
        "SANITIZER_MODEL_URL": "http://localhost:11434/v1/chat/completions",
        "SANITIZER_MODEL_NAME": "llama3.2"
      }
    }
  }
}
```
For LM Studio, change the env vars:
"SANITIZER_MODEL_URL": "http://localhost:1234/v1/chat/completions",
"SANITIZER_MODEL_NAME": "your-loaded-model-name"
## Configuration

Create `.sanitizer-ledger/config.json` to boost detection accuracy for your org:

```json
{
  "org_names": ["Acme Corp", "Acme"],
  "org_domains": ["acme.com", "acme.internal"],
  "project_codenames": ["Phoenix", "Titan"],
  "known_employees": ["John Smith"],
  "internal_ip_ranges": ["10.0.0.0/8", "172.16.0.0/12"],
  "always_allow": ["Google Cloud", "Kubernetes", "BigQuery"]
}
```
Or run the included CLI:
```bash
python scripts/ledger.py init-config
```
## How it works

The local model receives a strict DLP system prompt and returns JSON with:

- `sanitized_text` — the safe version of your prompt
- `mappings` — a list of what was replaced and why

A ledger entry (`.sanitizer-ledger/ledger.jsonl`) is written per operation, enabling the restore step. Credentials are blocked entirely — never stored, never passed through.
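As a mental model of the restore step, here is a minimal sketch of a ledger-driven replacement. The actual schema lives in `server.py`; the field names used below (`san_id`, `mappings`, `placeholder`, `original`) are assumptions for illustration:

```python
# Sketch only: field names are assumptions, the real schema is defined in server.py.
import json
from pathlib import Path

LEDGER = Path(".sanitizer-ledger/ledger.jsonl")

def restore(text: str, san_id: str) -> str:
    """Swap placeholders back to originals using the ledger entry for san_id."""
    with LEDGER.open() as fh:
        for line in fh:
            entry = json.loads(line)
            if entry.get("san_id") == san_id:
                for m in entry.get("mappings", []):
                    text = text.replace(m["placeholder"], m["original"])
                return text
    raise KeyError(f"no ledger entry for {san_id}")
```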
## Redaction categories

| Category | Examples | Severity |
|---|---|---|
| `CREDENTIAL` | API keys, tokens, passwords | CRITICAL — blocked |
| `INTERNAL_URL` | Intranet URLs, staging endpoints | CRITICAL |
| `PII_NAME` | Names, emails, phone numbers | HIGH |
| `ORG_NAME` | Company / subsidiary names | HIGH |
| `PROJECT_NAME` | Internal codenames | MEDIUM |
| `INFRA` | IPs, hostnames, DB names | MEDIUM |
| `FINANCIAL` | Revenue, deal sizes, budgets | MEDIUM |
| `LEGAL` | Contract terms, case numbers | HIGH |
## Contributing

This is an early proof of concept — feedback and contributions are very welcome.
Ideas for where this could go:
- [ ] Auto-suggest ledger config entries from detected patterns
- [ ] Claude Code hook integration (pre-prompt hook that auto-sanitizes)
- [ ] Confidence threshold config
- [ ] Batch / bulk sanitization mode
- [ ] Support for code block scanning (inline secrets, import paths)
- [ ] Web UI for ledger review
Open an issue or send a PR.
## License
MIT