
Token-Efficient MCP Server

A project-agnostic MCP (Model Context Protocol) server that provides 95%+ token savings through sandboxed data processing, progressive tool loading, and multi-language code execution.

🚀 Key Features

Multi-Language Code Execution

Execute code in a sandboxed environment:

  • Python, Bash, Node.js/JavaScript support
  • Critical for agent systems (initializer, coding, tester, verifier)
  • Run commands, tests, and validations with 98% token savings

Progressive Tool Disclosure

Load tools on-demand to reduce context usage:

  • Level 1: Tool names only (~100 tokens)
  • Level 2: Names + summaries (~2K tokens)
  • Level 3: Full definitions (only when needed)
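
The three levels above can be sketched as a lookup over a tool registry. This is a minimal illustration only; the registry shape and tool summaries here are assumptions, not the server's actual internals:

```typescript
// Illustrative tool registry; the real server's internal shape may differ.
type ToolDef = { name: string; summary: string; inputSchema: object };

const registry: ToolDef[] = [
  { name: "execute_code", summary: "Run sandboxed code", inputSchema: { type: "object" } },
  { name: "process_csv", summary: "Filter and aggregate CSV files", inputSchema: { type: "object" } },
  { name: "process_logs", summary: "Search log files with context", inputSchema: { type: "object" } },
];

// Level 1 returns names only, level 2 adds summaries, level 3 returns everything.
function listTools(level: "names_only" | "summary" | "full") {
  switch (level) {
    case "names_only":
      return registry.map(t => t.name);
    case "summary":
      return registry.map(({ name, summary }) => ({ name, summary }));
    case "full":
      return registry;
  }
}
```

The point of the tiered return types is that the model pays for full JSON schemas only when it has already decided, from the cheap name or summary listing, that it needs a particular tool.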

Sandboxed Data Processing

Process data securely before returning to context:

  • CSV filtering and aggregation (99% savings)
  • Log file analysis (95% savings)
  • Code execution with output filtering
  • Token measurement and optimization

Project Agnostic

Works with any project that needs:

  • Multi-language code execution
  • Large dataset processing
  • Log analysis
  • Token optimization

📦 Installation

# Clone the repository
git clone https://github.com/your-repo/token-efficient-mcp.git
cd token-efficient-mcp

# Install dependencies
npm install

# Build TypeScript
npm run build

โš™๏ธ Configuration

Add to your global ~/.claude.json:

{
  "mcpServers": {
    "token-efficient": {
      "command": "srt",
      "args": [
        "node",
        "/path/to/token-efficient-mcp/dist/index.js"
      ]
    }
  }
}

Note: The srt command provides OS-level sandboxing via sandbox-exec (macOS) or bubblewrap (Linux).

๐Ÿ› ๏ธ Available Tools

1. execute_code

Execute code in multiple languages with sandboxing.

// Run bash commands
execute_code({
  code: "npm test",
  language: "bash"
})

// Run Python scripts
execute_code({
  code: "import sys; print(sys.version)",
  language: "python"
})

// Run Node.js code
execute_code({
  code: "console.log('Hello from Node')",
  language: "node"
})

// Check health endpoint
execute_code({
  code: "curl -m 3 http://localhost:8000/api/health",
  language: "bash"
})

Supported Languages: python, bash, sh, node, javascript
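
Under the hood, a call like the ones above amounts to mapping the language name to an interpreter invocation. The table and runner below are an assumed sketch, not the server's actual implementation (the real server additionally wraps execution in the srt sandbox):

```typescript
import { execFileSync } from "node:child_process";

// Assumed language-to-interpreter table; the real server wraps these in srt.
const interpreters: Record<string, [string, string]> = {
  python: ["python3", "-c"],
  bash: ["bash", "-c"],
  sh: ["sh", "-c"],
  node: ["node", "-e"],
  javascript: ["node", "-e"],
};

function runCode(code: string, language: string, timeoutMs = 30_000): string {
  const entry = interpreters[language];
  if (!entry) throw new Error(`unsupported language: ${language}`);
  const [cmd, flag] = entry;
  // execFileSync passes the code as a single argv entry, so the outer
  // invocation itself is not subject to shell interpolation.
  return execFileSync(cmd, [flag, code], { timeout: timeoutMs, encoding: "utf8" });
}
```

Passing the user code as one argument (rather than splicing it into a shell string) keeps quoting issues out of the dispatch layer; the sandbox then constrains what the code itself can do.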

2. list_token_efficient_tools

Discover available tools with progressive disclosure.

// Level 1: Names only (~100 tokens)
list_token_efficient_tools({ level: "names_only" })

// Level 2: Summaries (~2K tokens)
list_token_efficient_tools({ level: "summary" })

// Level 3: Full definitions
list_token_efficient_tools({ level: "full" })

3. process_csv

Process CSV files with filtering and aggregation.

// Example: Find expensive stocks
process_csv({
  file_path: "data/stocks.csv",
  filter_expr: "price > 100 and volume > 1000000",
  columns: ["symbol", "price", "volume", "change"],
  limit: 10,
  response_format: "summary"
})
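
The savings come from filtering rows before anything reaches the model's context. A sketch of that step, using a predicate function in place of the tool's filter_expr string (whose parser is not shown here), with simplified CSV handling (no quoted fields):

```typescript
// Parse a small CSV string, keep only matching rows, project columns,
// and cap the result at `limit` rows.
function filterCsv(
  csv: string,
  predicate: (row: Record<string, string>) => boolean,
  columns?: string[],
  limit = 10,
): Record<string, string>[] {
  const [header, ...lines] = csv.trim().split("\n");
  const fields = header.split(",");
  const rows = lines.map(line => {
    const cells = line.split(",");
    return Object.fromEntries(fields.map((f, i) => [f, cells[i] ?? ""]));
  });
  const kept = rows.filter(predicate).slice(0, limit);
  return columns
    ? kept.map(r => Object.fromEntries(columns.map(c => [c, r[c]])))
    : kept;
}
```

With 10,000 rows in and 10 rows out, only the filtered, column-projected slice is serialized into the tool response; everything else never leaves the sandbox.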

4. process_logs

Filter and analyze log files efficiently.

// Example: Find all errors with context
process_logs({
  file_path: "logs/application.log",
  pattern: "ERROR|CRITICAL",
  context_lines: 2,
  limit: 50,
  response_format: "summary"
})
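
A sketch of the matching step: collect the line indices around each match into a set, so overlapping context windows are merged rather than duplicated. The internals here are an assumption about how such a tool would work, not the server's actual code:

```typescript
// Return matching lines plus `contextLines` of surrounding context,
// capped at `limit` matches, with overlapping windows deduplicated.
function grepWithContext(
  text: string,
  pattern: RegExp,
  contextLines: number,
  limit: number,
): string[] {
  const lines = text.split("\n");
  const keep = new Set<number>();
  let matches = 0;
  for (let i = 0; i < lines.length && matches < limit; i++) {
    if (pattern.test(lines[i])) {
      matches++;
      const lo = Math.max(0, i - contextLines);
      const hi = Math.min(lines.length - 1, i + contextLines);
      for (let j = lo; j <= hi; j++) keep.add(j);
    }
  }
  return [...keep].sort((a, b) => a - b).map(i => lines[i]);
}
```

This is why a 100,000-line log can come back as a few hundred lines: only match lines and their immediate neighborhoods survive the filter.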

5. get_token_savings_report

Get optimization tips and savings potential.

📊 Token Savings Examples

Code Execution

// Without execute_code: Multi-turn conversation
// Agent: "Should I run npm test?" → User: "Yes" → Run → Parse output
// Estimated: 5,000+ tokens across multiple turns

// With execute_code: Single call
execute_code({ code: "npm test", language: "bash" })
// Returns: { success: true, output: "Tests passed", exit_code: 0 }
// Result: 200 tokens (98% savings)

CSV Processing

// Without optimization: 200,000 tokens
// All 10,000 rows returned to context

// With token-efficient MCP: 2,000 tokens (99% savings)
// Only 100 filtered rows returned

Log Analysis

// Without optimization: 500,000 tokens
// All 100,000 log lines returned

// With token-efficient MCP: 5,000 tokens (99% savings)
// Only 500 matching lines with context returned

Tool Loading

// Traditional MCP: 150,000 tokens
// All tool definitions loaded at startup

// Token-efficient MCP: 2,000 tokens (98.7% savings)
// Tools loaded on-demand

🔒 Security

The server uses OS-level sandboxing via the srt wrapper:

  • Filesystem isolation: Limited to temp directories for code execution
  • Network restrictions: No outbound connections by default
  • Process monitoring: Timeouts (1-300s) and resource limits
  • Multi-language support: Sandboxed Python, Bash, Node.js execution

🧪 Testing

# Build the project
npm run build

# Smoke-test the built server: it speaks MCP over stdio, so it should
# start without errors and wait for requests (Ctrl+C to stop)
node dist/index.js

# Or test directly with MCP
# The server will be loaded by Claude Code via ~/.claude.json config

📈 Performance Metrics

The server tracks and reports:

  • Input tokens: Size of request
  • Output tokens: Size of response
  • Processing efficiency: Items processed per token
  • Estimated savings: Percentage of tokens saved

Example response:

{
  "token_metrics": {
    "input_tokens": 250,
    "output_tokens": 1500,
    "estimated_savings_percent": 98.5
  }
}
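
A common rough heuristic for metrics like these is about four characters per token for English text. The sketch below uses that heuristic; the server's actual counting method is not specified in this README:

```typescript
// ~4 characters per token is a rough heuristic for English text,
// not an exact tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Savings = how much smaller the filtered response is than the raw data
// it replaced, as a percentage rounded to one decimal place.
function estimatedSavingsPercent(rawData: string, response: string): number {
  const raw = estimateTokens(rawData);
  if (raw === 0) return 0;
  const pct = ((raw - estimateTokens(response)) / raw) * 100;
  return Math.round(pct * 10) / 10;
}
```

For precise accounting you would swap in a real tokenizer, but a character-count heuristic is usually adequate for deciding whether a response is worth filtering further.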

๐Ÿค Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Ensure token efficiency principles are followed
  5. Submit a pull request

๐Ÿ“ License

MIT License - see LICENSE file for details.

🔗 Related Resources

Recommended Servers

  • playwright-mcp: A Model Context Protocol server that enables LLMs to interact with web pages through structured accessibility snapshots, without requiring vision models or screenshots.
  • Magic Component Platform (MCP): An AI-powered tool that generates modern UI components from natural language descriptions, integrating with popular IDEs to streamline UI development workflows.
  • Audiense Insights MCP Server: Enables interaction with Audiense Insights accounts via the Model Context Protocol, facilitating the extraction and analysis of marketing insights and audience data, including demographics, behavior, and influencer engagement.
  • VeyraX MCP: A single MCP tool to connect all your favorite tools: Gmail, Calendar, and 40 more.
  • graphlit-mcp-server: Enables integration between MCP clients and the Graphlit service. Ingest anything from Slack to Gmail to podcast feeds, in addition to web crawling, into a Graphlit project, then retrieve relevant content from the MCP client.
  • Kagi MCP Server: An MCP server that integrates Kagi search capabilities with Claude AI, enabling Claude to perform real-time web searches when answering questions that require up-to-date information.
  • E2B: Uses MCP to run code via e2b.
  • Neon Database: An MCP server for interacting with the Neon Management API and databases.
  • Exa Search: A Model Context Protocol (MCP) server that lets AI assistants like Claude use the Exa AI Search API for web searches, allowing AI models to get real-time web information in a safe and controlled way.
  • Qdrant Server: An example of how to create an MCP server for Qdrant, a vector search engine.