Token-Efficient MCP Server
A project-agnostic MCP (Model Context Protocol) server that provides 95%+ token savings through sandboxed data processing, progressive tool loading, and multi-language code execution.
Key Features
Multi-Language Code Execution
Execute code in a sandboxed environment:
- Python, Bash, Node.js/JavaScript support
- Critical for agent systems (initializer, coding, tester, verifier)
- Run commands, tests, and validations with 98% token savings
Progressive Tool Disclosure
Load tools on-demand to reduce context usage:
- Level 1: Tool names only (~100 tokens)
- Level 2: Names + summaries (~2K tokens)
- Level 3: Full definitions (only when needed)
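A minimal sketch of how these levels might be served from a tool registry. The registry entries below are hypothetical placeholders, not the server's actual tool list:

```javascript
// Minimal sketch of progressive tool disclosure over a hypothetical registry.
const registry = [
  { name: "execute_code", summary: "Run sandboxed code", schema: { /* full JSON schema */ } },
  { name: "process_csv", summary: "Filter/aggregate CSV files", schema: { /* full JSON schema */ } },
];

function listTools(level) {
  switch (level) {
    case "names_only": // Level 1: cheapest, identifiers only
      return registry.map((t) => t.name);
    case "summary": // Level 2: names plus one-line summaries
      return registry.map(({ name, summary }) => ({ name, summary }));
    case "full": // Level 3: everything, including input schemas
      return registry;
    default:
      throw new Error(`unknown level: ${level}`);
  }
}

console.log(listTools("names_only")); // → [ 'execute_code', 'process_csv' ]
```

Each level strictly extends the previous one, so a client can start at Level 1 and pay for detail only on the tools it actually intends to call.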
Sandboxed Data Processing
Process data in the sandbox so only results reach the context:
- CSV filtering and aggregation (99% savings)
- Log file analysis (95% savings)
- Code execution with output filtering
- Token measurement and optimization
Project Agnostic
Works with any project that needs:
- Multi-language code execution
- Large dataset processing
- Log analysis
- Token optimization
Installation
# Clone the repository
git clone https://github.com/your-repo/token-efficient-mcp.git
cd token-efficient-mcp
# Install dependencies
npm install
# Build TypeScript
npm run build
Configuration
Add to your global ~/.claude.json:
{
"mcpServers": {
"token-efficient": {
"command": "srt",
"args": [
"node",
"/path/to/token-efficient-mcp/dist/index.js"
]
}
}
}
Note: The srt command provides OS-level sandboxing via sandbox-exec (macOS) or bubblewrap (Linux).
Available Tools
1. execute_code
Execute code in multiple languages with sandboxing.
// Run bash commands
execute_code({
code: "npm test",
language: "bash"
})
// Run Python scripts
execute_code({
code: "import sys; print(sys.version)",
language: "python"
})
// Run Node.js code
execute_code({
code: "console.log('Hello from Node')",
language: "node"
})
// Check health endpoint
execute_code({
code: "curl -m 3 http://localhost:8000/api/health",
language: "bash"
})
Supported Languages: python, bash, sh, node, javascript
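Internally, a dispatch table like the following could route each language to its interpreter. The command table is an assumption for illustration, not taken from the server source:

```javascript
// Hypothetical language → interpreter mapping; aliases collapse to one runtime.
const RUNTIMES = {
  python: ["python3", "-c"],
  bash: ["bash", "-c"],
  sh: ["sh", "-c"],
  node: ["node", "-e"],
  javascript: ["node", "-e"],
};

function commandFor(language, code) {
  const runtime = RUNTIMES[language];
  if (!runtime) throw new Error(`unsupported language: ${language}`);
  const [bin, flag] = runtime;
  return [bin, flag, code]; // argv to hand to the sandboxed spawn
}

console.log(commandFor("python", "print(1)")); // → [ 'python3', '-c', 'print(1)' ]
```

Passing the code as a single argv element (rather than interpolating it into a shell string) avoids a layer of quoting and injection problems before the sandbox is even involved.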
2. list_token_efficient_tools
Discover available tools with progressive disclosure.
// Level 1: Names only (100 tokens)
list_token_efficient_tools({ level: "names_only" })
// Level 2: Summaries (2K tokens)
list_token_efficient_tools({ level: "summary" })
// Level 3: Full definitions
list_token_efficient_tools({ level: "full" })
3. process_csv
Process CSV files with filtering and aggregation.
// Example: Find expensive stocks
process_csv({
file_path: "data/stocks.csv",
filter_expr: "price > 100 and volume > 1000000",
columns: ["symbol", "price", "volume", "change"],
limit: 10,
response_format: "summary"
})
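The sandbox-side work can be pictured as a plain parse-filter-project pipeline. This sketch uses a JavaScript predicate in place of the filter_expr mini-language and a naive comma split rather than a real CSV parser:

```javascript
// Sketch: filter CSV rows in the sandbox, return only the slice that matters.
function processCsv(csvText, { predicate, columns, limit }) {
  const [headerLine, ...lines] = csvText.trim().split("\n");
  const headers = headerLine.split(",");
  const rows = lines.map((line) => {
    const cells = line.split(","); // naive split; quoted fields need a real parser
    return Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
  });
  return rows
    .filter(predicate)        // drop rows the caller doesn't want
    .slice(0, limit)          // cap the payload size
    .map((row) => Object.fromEntries(columns.map((c) => [c, row[c]]))); // project columns
}

const csv = "symbol,price,volume\nAAPL,180,2000000\nPENNY,2,500";
const out = processCsv(csv, {
  predicate: (r) => Number(r.price) > 100 && Number(r.volume) > 1000000,
  columns: ["symbol", "price"],
  limit: 10,
});
console.log(out); // → [ { symbol: 'AAPL', price: '180' } ]
```

The token savings come from the last two steps: only the filtered, projected, capped rows ever leave the sandbox.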
4. process_logs
Filter and analyze log files efficiently.
// Example: Find all errors with context
process_logs({
file_path: "logs/application.log",
pattern: "ERROR|CRITICAL",
context_lines: 2,
limit: 50,
response_format: "summary"
})
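Under the hood this behaves like grep -C: keep each matching line plus its neighbors. A minimal sketch:

```javascript
// Sketch: return only matching log lines plus N lines of surrounding context.
function grepWithContext(logText, pattern, contextLines, limit) {
  const lines = logText.split("\n");
  const re = new RegExp(pattern);
  const keep = new Set(); // Set deduplicates overlapping context windows
  lines.forEach((line, i) => {
    if (!re.test(line)) return;
    const lo = Math.max(0, i - contextLines);
    const hi = Math.min(lines.length - 1, i + contextLines);
    for (let j = lo; j <= hi; j++) keep.add(j);
  });
  return [...keep].sort((a, b) => a - b).slice(0, limit).map((i) => lines[i]);
}

const log = "boot ok\nINFO ready\nERROR db down\nretrying\nINFO recovered";
console.log(grepWithContext(log, "ERROR|CRITICAL", 1, 50));
// → [ 'INFO ready', 'ERROR db down', 'retrying' ]
```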
5. get_token_savings_report
Get optimization tips and savings potential.
Token Savings Examples
Code Execution
// Without execute_code: Multi-turn conversation
// Agent: "Should I run npm test?" → User: "Yes" → Run → Parse output
// Estimated: 5,000+ tokens across multiple turns
// With execute_code: Single call
execute_code({ code: "npm test", language: "bash" })
// Returns: { success: true, output: "Tests passed", exit_code: 0 }
// Result: 200 tokens (98% savings)
CSV Processing
// Without optimization: 200,000 tokens
// All 10,000 rows returned to context
// With token-efficient MCP: 2,000 tokens (99% savings)
// Only 100 filtered rows returned
Log Analysis
// Without optimization: 500,000 tokens
// All 100,000 log lines returned
// With token-efficient MCP: 5,000 tokens (99% savings)
// Only 500 matching lines with context returned
Tool Loading
// Traditional MCP: 150,000 tokens
// All tool definitions loaded at startup
// Token-efficient MCP: 2,000 tokens (98.7% savings)
// Tools loaded on-demand
Security
The server uses OS-level sandboxing via srt wrapper:
- Filesystem isolation: Limited to temp directories for code execution
- Network restrictions: No outbound connections by default
- Process monitoring: Timeouts (1-300s) and resource limits
- Multi-language support: Sandboxed Python, Bash, Node.js execution
Testing
# Build the project
npm run build
# Smoke-test the built server (it speaks MCP over stdio, so it waits for a client)
node -e "
const { exec } = require('child_process');
const child = exec('node dist/index.js', (err, stdout, stderr) => {
  console.log(stdout || stderr);
});
setTimeout(() => child.kill(), 2000);
"
# Or test directly with MCP
# The server will be loaded by Claude Code via ~/.claude.json config
Performance Metrics
The server tracks and reports:
- Input tokens: Size of request
- Output tokens: Size of response
- Processing efficiency: Items processed per token
- Estimated savings: Percentage of tokens saved
Example response:
{
"token_metrics": {
"input_tokens": 250,
"output_tokens": 1500,
"estimated_savings_percent": 98.5
}
}
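How the savings figure might be derived: compare what the raw data would have cost against what was actually returned. The 4-characters-per-token heuristic below is a common approximation, not the server's exact tokenizer:

```javascript
// Sketch: estimate token savings from raw vs. returned payload sizes.
// Assumes a rough 4-characters-per-token heuristic.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function savingsPercent(rawText, returnedText) {
  const raw = estimateTokens(rawText);
  const returned = estimateTokens(returnedText);
  if (raw === 0) return 0; // nothing to save on an empty payload
  return Number((((raw - returned) / raw) * 100).toFixed(1));
}

const raw = "x".repeat(400000);    // ~100,000 tokens of raw logs
const returned = "x".repeat(6000); // ~1,500 tokens actually sent back
console.log(savingsPercent(raw, returned)); // → 98.5
```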
Contributing
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure token efficiency principles are followed
- Submit a pull request
License
MIT License - see LICENSE file for details.