DB Schenker Shipment Tracker MCP Server
An MCP (Model Context Protocol) server that tracks DB Schenker shipments by reference number, providing structured shipment information including sender/receiver details, package information, and complete tracking history. The DB Schenker public API is rate-limited. This implementation handles rate limits reliably and returns structured error messages when limits are encountered.
Setup Instructions
Prerequisites
- Node.js: Version 18 or higher
- npm: Comes bundled with Node.js
Environment Setup
- Clone or download this repository:
  git clone https://github.com/digitalxenon98/sendify-dbschenker-mcp
  cd sendify-dbschenker-mcp
- Verify the Node.js installation:
  node --version   # Should be v18 or higher
  npm --version
Build/Install Dependencies
- Install all dependencies:
  npm install
  This installs:
  - Runtime dependencies: @modelcontextprotocol/sdk, zod
  - Development dependencies: typescript, tsx, @types/node
- Build the TypeScript project (optional, for production):
  npm run build
  This compiles the TypeScript sources to JavaScript in the dist/ directory.
How to Run the MCP Server
Development Mode
Run the server directly with TypeScript (no build required):
npm run dev
The server will start and communicate via stdio (standard input/output), which is the standard way MCP servers operate.
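For orientation, a stdio MCP server exposing a track_shipment tool typically looks like the sketch below. This is an illustrative outline built on the MCP TypeScript SDK and zod (both listed as dependencies above), not the actual contents of src/server.ts; the shipment lookup is stubbed out.

// Illustrative sketch only — the real src/server.ts may be organized differently.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "db-schenker-tracker", version: "1.0.0" });

// Register the track_shipment tool; the reference number is validated with zod.
server.tool(
  "track_shipment",
  { reference: z.string().describe("DB Schenker shipment reference number") },
  async ({ reference }) => {
    // Placeholder result; the real server looks the reference up against the
    // DB Schenker API here, with the retries and caching described below.
    const result = { ok: true, reference };
    return { content: [{ type: "text" as const, text: JSON.stringify(result, null, 2) }] };
  }
);

// Communicate over stdin/stdout, which is how MCP clients launch and talk to the server.
await server.connect(new StdioServerTransport());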
Production Mode
- First, build the project:
  npm run build
- Then run the compiled JavaScript:
  npm start
MCP Client Configuration
To use this MCP server with an MCP client (like Claude Desktop), add it to your MCP configuration:
{
"mcpServers": {
"db-schenker-tracker": {
"command": "node",
"args": ["/path/to/sendify-dbschenker-mcp/dist/server.js"]
}
}
}
For development, you can use tsx instead:
{
"mcpServers": {
"db-schenker-tracker": {
"command": "tsx",
"args": ["/path/to/sendify-dbschenker-mcp/src/server.ts"]
}
}
}
How to Test the Tool
Using an MCP Client
- Start your MCP client (e.g., Claude Desktop) with the server configured
- Call the tool with a reference number:
track_shipment(reference: "1806203236")
Example Reference Numbers
You can test with these reference numbers:
- 1806203236
- 1806290829
- 1806273700
- 1806272330
- 1806271886
Expected Response Format
Success Response:
{
"ok": true,
"reference": "1806203236",
"shipment": {
"id": "...",
"stt": "...",
"transportMode": "LAND",
...
},
"sender": {...},
"receiver": {...},
"packageDetails": {...},
"trackingHistory": [...],
...
}
Error Response (Not Found):
{
"ok": false,
"error": "NOT_FOUND",
"message": "No shipment found for that reference number.",
"reference": "1806203236"
}
Error Response (Rate Limited):
{
"ok": false,
"error": "API_ERROR",
"message": "Failed to fetch shipment data from DB Schenker API.",
"reference": "1806203236",
"details": "HTTP 429 Too Many Requests...",
"hint": "DB Schenker API rate-limited the request. Please retry later."
}
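If you consume the tool output from TypeScript, the success and error payloads above can be modeled as a discriminated union on the ok field. The following types are an illustrative sketch derived from the JSON examples in this README (field lists abridged); they are not exported by the package.

// Illustrative types derived from the JSON examples above; not exported by the package.
interface TrackShipmentSuccess {
  ok: true;
  reference: string;
  shipment: { id: string; stt: string; transportMode: string; [key: string]: unknown };
  sender: Record<string, unknown>;
  receiver: Record<string, unknown>;
  packageDetails: Record<string, unknown>;
  trackingHistory: Array<Record<string, unknown>>;
}

interface TrackShipmentError {
  ok: false;
  error: "NOT_FOUND" | "API_ERROR";
  message: string;
  reference: string;
  details?: string; // present on API errors, e.g. "HTTP 429 Too Many Requests..."
  hint?: string;    // present when the request was rate limited
}

type TrackShipmentResult = TrackShipmentSuccess | TrackShipmentError;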
Manual Testing (Node.js)
You can also test the server manually by sending MCP protocol messages via stdio, though this requires understanding the MCP protocol format.
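One convenient way to do this without hand-crafting JSON-RPC messages is to drive the server with the MCP TypeScript SDK's stdio client, which handles the protocol framing for you. The sketch below is an illustrative test harness (not part of the repository); it assumes the built server at dist/server.js and uses one of the example reference numbers above.

// test-client.ts — minimal MCP stdio client for exercising the tool locally (illustrative).
// Run with: npx tsx test-client.ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/server.js"], // adjust to the path of your build output
});

const client = new Client({ name: "manual-test", version: "0.0.1" });
await client.connect(transport);

// Call the tool with one of the example reference numbers.
const result = await client.callTool({
  name: "track_shipment",
  arguments: { reference: "1806203236" },
});

console.log(JSON.stringify(result, null, 2));
await client.close();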
Rate Limiting & Reliability
The DB Schenker public API enforces rate limits to ensure fair usage and system stability. This implementation includes several mechanisms to handle rate limiting gracefully:
- Automatic Retries: Requests that fail due to rate limiting (HTTP 429) are automatically retried with exponential backoff, providing up to 3 retry attempts with increasing delays.
- Exponential Backoff: Each retry waits progressively longer before attempting again, reducing the likelihood of hitting rate limits on subsequent attempts.
- Response Caching: Successful API responses are cached in memory for 60 seconds, significantly reducing the number of API calls for repeated queries within the cache window.
- Graceful Error Handling: When rate limits are encountered, the tool returns clear error messages with helpful hints, allowing users to understand the situation and retry when appropriate.
All HTTP 429 responses are handled transparently, and users will receive informative error messages if rate limits persist after all retry attempts are exhausted.
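The overall retry-and-cache strategy can be summarized in a short sketch. This is a simplified illustration of the behaviour described above, not the exact code in src/server.ts; the helper names, cache key, and backoff delays are assumptions.

// Simplified sketch of the retry + cache behaviour described above.
// Helper names, backoff delays, and the cache key are illustrative assumptions.
const CACHE_TTL_MS = 60_000; // successful responses are cached for 60 seconds
const MAX_RETRIES = 3;

const cache = new Map<string, { data: unknown; expires: number }>();

async function fetchWithRetry(url: string): Promise<unknown> {
  for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
    const response = await fetch(url);
    if (response.status === 429 && attempt < MAX_RETRIES) {
      // Exponential backoff: wait 1s, 2s, then 4s before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
      continue;
    }
    if (!response.ok) {
      throw new Error(`HTTP ${response.status} ${response.statusText}`);
    }
    return response.json();
  }
  throw new Error("Rate limit retries exhausted");
}

async function cachedFetch(url: string): Promise<unknown> {
  const hit = cache.get(url);
  if (hit && hit.expires > Date.now()) return hit.data; // serve repeat queries from the cache
  const data = await fetchWithRetry(url);
  cache.set(url, { data, expires: Date.now() + CACHE_TTL_MS });
  return data;
}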