
MCP Utility Tools
A collection of utility tools for the Model Context Protocol (MCP) that provide caching, retry logic, batch operations, and rate limiting capabilities to enhance any MCP-based workflow.
Features
- 🔄 Retry with Exponential Backoff - Automatically retry failed operations with configurable delays
- 💾 TTL-based Caching - Cache expensive operations with automatic expiration
- 🚀 Batch Operations - Process multiple operations in parallel with concurrency control
- 🚦 Rate Limiting - Prevent API abuse with sliding window rate limiting
- 🔍 Full TypeScript Support - Type-safe with comprehensive TypeScript definitions
Installation
npm install mcp-utility-tools
# or with yarn
yarn add mcp-utility-tools
# or with bun
bun add mcp-utility-tools
Quick Start
1. Add to Claude Desktop
Add the utility tools to your Claude Desktop configuration:
{
  "mcpServers": {
    "utility-tools": {
      "command": "npx",
      "args": ["mcp-utility-tools"]
    }
  }
}
2. Use with Claude
Once configured, Claude can use these tools to enhance any workflow:
# Check cache before expensive operation
cache_result = mcp_cache_get(key="api-response", namespace="github")

if not cache_result["found"]:
    # Fetch data with retry
    response = fetch_with_retry("https://api.github.com/user/repos")

    # Cache for 5 minutes
    mcp_cache_put(
        key="api-response",
        value=response,
        ttl_seconds=300,
        namespace="github"
    )
Available Tools
🔄 retry_operation
Retry operations with exponential backoff and jitter.
{
  "tool": "retry_operation",
  "arguments": {
    "operation_id": "unique-operation-id",
    "operation_type": "http_request",
    "operation_data": {
      "url": "https://api.example.com/data",
      "method": "GET"
    },
    "max_retries": 3,
    "initial_delay_ms": 1000
  }
}
Features:
- Tracks retry attempts across multiple calls
- Exponential backoff with configurable delays
- Optional jitter to prevent thundering herd
- Prevents duplicate retries for successful operations
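For reference, the delay schedule follows the usual exponential backoff pattern. A minimal sketch, assuming base-2 growth from initial_delay_ms with full jitter (the helper name and exact formula are illustrative, not the server's implementation):

  // Illustrative backoff schedule; the server's exact formula and jitter
  // strategy may differ.
  function backoffDelayMs(attempt: number, initialDelayMs: number, jitter = true): number {
    const base = initialDelayMs * Math.pow(2, attempt); // attempt 0, 1, 2, ...
    return jitter ? Math.random() * base : base;        // full jitter spreads out retries
  }

  // With initial_delay_ms = 1000 and no jitter: 1000 ms, 2000 ms, 4000 ms, ...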
💾 Cache Operations
cache_get
Retrieve values from cache with TTL support.
{
  "tool": "cache_get",
  "arguments": {
    "key": "user-data-123",
    "namespace": "users"
  }
}
cache_put
Store values with automatic expiration.
{
  "tool": "cache_put",
  "arguments": {
    "key": "user-data-123",
    "value": { "name": "John", "role": "admin" },
    "ttl_seconds": 300,
    "namespace": "users"
  }
}
Features:
- Namespace support to prevent key collisions
- Automatic cleanup of expired entries
- Configurable TTL (1 second to 24 hours)
- Memory-efficient storage
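Conceptually, the cache behaves like a namespaced in-memory map with a per-entry expiry timestamp. A minimal sketch of that behavior (an illustration only; the actual storage details may differ):

  // Illustrative in-memory TTL store, keyed by `${namespace}:${key}`.
  type Entry = { value: unknown; expiresAt: number };
  const store = new Map<string, Entry>();

  function cachePut(namespace: string, key: string, value: unknown, ttlSeconds: number): void {
    store.set(`${namespace}:${key}`, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }

  function cacheGet(namespace: string, key: string): { found: boolean; value?: unknown } {
    const entry = store.get(`${namespace}:${key}`);
    if (!entry || entry.expiresAt <= Date.now()) {
      store.delete(`${namespace}:${key}`); // lazy cleanup of expired entries
      return { found: false };
    }
    return { found: true, value: entry.value };
  }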
🚀 batch_operation
Process multiple operations with controlled concurrency.
{
  "tool": "batch_operation",
  "arguments": {
    "operations": [
      { "id": "op1", "type": "fetch", "data": { "url": "/api/1" } },
      { "id": "op2", "type": "fetch", "data": { "url": "/api/2" } },
      { "id": "op3", "type": "fetch", "data": { "url": "/api/3" } }
    ],
    "concurrency": 2,
    "timeout_ms": 5000,
    "continue_on_error": true,
    "use_cache": true
  }
}
Features:
- Configurable concurrency (1-20 operations)
- Per-operation timeout
- Continue or fail-fast on errors
- Optional result caching
- Maintains order of results
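Controlled concurrency amounts to running a fixed pool of workers over the operation list while writing results back by index. A minimal sketch of that pattern (timeouts and error handling omitted; this is not the server's actual scheduler):

  // Run `operations` with at most `concurrency` in flight, preserving result order.
  async function runBatch<T, R>(
    operations: T[],
    handler: (op: T) => Promise<R>,
    concurrency: number
  ): Promise<R[]> {
    const results = new Array<R>(operations.length);
    let next = 0;

    async function worker(): Promise<void> {
      while (next < operations.length) {
        const index = next++; // claim the next operation
        results[index] = await handler(operations[index]);
      }
    }

    await Promise.all(Array.from({ length: concurrency }, worker));
    return results;
  }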
🚦 rate_limit_check
Implement sliding window rate limiting.
{
  "tool": "rate_limit_check",
  "arguments": {
    "resource": "api.github.com",
    "max_requests": 60,
    "window_seconds": 60,
    "increment": true
  }
}
Features:
- Per-resource tracking
- Sliding window algorithm
- Automatic reset after time window
- Option to check the current count without incrementing
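A sliding window check boils down to counting the request timestamps recorded for a resource within the last window_seconds. A minimal sketch of that logic (illustrative only; the server's bookkeeping and return shape may differ):

  // Illustrative sliding-window limiter, keyed by resource name.
  const requestLog = new Map<string, number[]>(); // resource -> request timestamps (ms)

  function rateLimitCheck(resource: string, maxRequests: number, windowSeconds: number, increment: boolean) {
    const now = Date.now();
    const windowStart = now - windowSeconds * 1000;
    // Drop timestamps that have slid out of the window
    const recent = (requestLog.get(resource) ?? []).filter((t) => t > windowStart);

    const allowed = recent.length < maxRequests;
    if (allowed && increment) {
      recent.push(now); // count this request against the current window
    }
    requestLog.set(resource, recent);

    const oldest = recent[0];
    const resetInSeconds = oldest ? Math.ceil((oldest + windowSeconds * 1000 - now) / 1000) : 0;
    return { allowed, reset_in_seconds: resetInSeconds };
  }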
Integration Examples
With GitHub MCP Server
// Cache GitHub API responses
async function getRepositoryWithCache(owner: string, repo: string) {
  const cacheKey = `github:${owner}/${repo}`;

  // Check cache first
  const cached = await mcp_cache_get({
    key: cacheKey,
    namespace: "github"
  });

  if (cached.found) {
    return cached.value;
  }

  // Fetch with retry
  const data = await retryableGitHubCall(owner, repo);

  // Cache for 10 minutes
  await mcp_cache_put({
    key: cacheKey,
    value: data,
    ttl_seconds: 600,
    namespace: "github"
  });

  return data;
}
With Slack MCP Server
// Rate-limited Slack notifications
async function sendSlackNotifications(messages: string[], channel: string) {
  for (const message of messages) {
    // Check the rate limit and increment the counter
    let check = await mcp_rate_limit_check({
      resource: `slack:${channel}`,
      max_requests: 10,
      window_seconds: 60,
      increment: true
    });

    // If we are over the limit, wait for the window to reset and re-check
    while (!check.allowed) {
      console.log(`Rate limited. Retry in ${check.reset_in_seconds}s`);
      await sleep(check.reset_in_seconds * 1000);
      check = await mcp_rate_limit_check({
        resource: `slack:${channel}`,
        max_requests: 10,
        window_seconds: 60,
        increment: true
      });
    }

    await mcp_slack_post_message({
      channel_id: channel,
      text: message
    });
  }
}
Architecture
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│                 │     │                  │     │                 │
│  Claude/Client  │────▶│ MCP Utility Tools│────▶│  Cache Storage  │
│                 │     │                  │     │   (In-Memory)   │
└─────────────────┘     └──────────────────┘     └─────────────────┘
         │                       │
         │                       │
         ▼                       ▼
┌─────────────────┐     ┌──────────────────┐
│    Other MCP    │     │    Retry/Rate    │
│     Servers     │     │  Limit Tracking  │
└─────────────────┘     └──────────────────┘
Development
# Clone the repository
git clone https://github.com/haasonsaas/mcp-utility-tools.git
cd mcp-utility-tools
# Install dependencies
npm install
# Build the project
npm run build
# Run tests
npm test
# Run in development mode
npm run dev
Testing
Run the comprehensive test suite:
# Unit tests
npm test
# Integration tests with test harness
npm run test:integration
# Test with MCP Inspector
npx @modelcontextprotocol/inspector build/index-v2.js
Contributing
We welcome contributions! Please see our Contributing Guide for details.
Areas for Contribution
- 🔌 Storage Backends: Add Redis, SQLite support
- 🔧 New Tools: Circuit breakers, request deduplication
- 📊 Metrics: Add performance tracking and analytics
- 🌐 Examples: More integration examples with other MCP servers
License
MIT © Jonathan Haas
Acknowledgments
Built on top of the Model Context Protocol SDK by Anthropic.
<p align="center"> Made with ❤️ for the MCP community </p>