# DB Schenker Shipment Tracker MCP Server

An MCP (Model Context Protocol) server that tracks DB Schenker shipments by reference number, providing structured shipment information including sender/receiver details, package information, and the complete tracking history.
## Setup Instructions

### Prerequisites

- **Node.js**: version 18 or higher
- **npm**: comes bundled with Node.js
### Environment Setup

Clone or download this repository:

```shell
git clone https://github.com/digitalxenon98/sendify-dbschenker-mcp
cd sendify-dbschenker-mcp
```

Verify your Node.js installation:

```shell
node --version   # should be v18 or higher
npm --version
```
### Build/Install Dependencies

Install all dependencies:

```shell
npm install
```

This will install:

- Runtime dependencies: `@modelcontextprotocol/sdk`, `zod`
- Development dependencies: `typescript`, `tsx`, `@types/node`
Build the TypeScript project (optional, for production):

```shell
npm run build
```

This compiles the TypeScript sources to JavaScript in the `dist/` directory.
## How to Run the MCP Server

### Development Mode

Run the server directly from the TypeScript sources, with no build step required. The server starts and communicates via stdio (standard input/output), which is the standard transport for MCP servers.
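A typical invocation, assuming the project defines the usual `dev` script wired to `tsx` (the script name is an assumption, not confirmed by the repository):

```shell
# Run the server from TypeScript source; assumes a "dev" script that invokes tsx
npm run dev
```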
### Production Mode

First, build the project:

```shell
npm run build
```

Then run the compiled JavaScript:

```shell
npm start
```
## MCP Client Configuration

To use this MCP server with an MCP client (such as Claude Desktop), add it to your client's MCP configuration. For development, you can point the entry at `tsx` instead of the compiled JavaScript.
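As a sketch of such a configuration (the server name and install path are illustrative assumptions), a Claude Desktop entry might look like this:

```json
{
  "mcpServers": {
    "dbschenker-tracker": {
      "command": "node",
      "args": ["/absolute/path/to/sendify-dbschenker-mcp/dist/index.js"]
    }
  }
}
```

For development, the same entry could use `"command": "npx"` with `"args": ["tsx", "/absolute/path/to/sendify-dbschenker-mcp/src/index.ts"]`, skipping the build step.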
## How to Test the Tool

### Using an MCP Client

1. Start your MCP client (e.g., Claude Desktop) with the server configured.
2. Call the tool with a reference number:

   ```
   track_shipment(reference: "1806203236")
   ```
### Example Reference Numbers

You can test with these reference numbers:

- `1806203236`
- `1806290829`
- `1806273700`
- `1806272330`
- `1806271886`
### Expected Response Format

The tool returns one of three response types:

- **Success Response**
- **Error Response (Not Found)**
- **Error Response (Rate Limited)**
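The concrete payloads are not reproduced here; as a rough sketch, MCP tool results wrap text content in a `content` array (the shipment fields shown are assumptions about this server's output, not its exact schema):

```json
{
  "content": [
    {
      "type": "text",
      "text": "{ \"reference\": \"1806203236\", \"status\": \"Delivered\", ... }"
    }
  ]
}
```

The two error responses use the same shape with `"isError": true` and an explanatory message in the `text` field.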
### Manual Testing (Node.js)

You can also test the server manually by sending MCP protocol messages over stdio, though this requires familiarity with the MCP message format (JSON-RPC 2.0).
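As a concrete sketch, the snippet below builds the JSON-RPC 2.0 `tools/call` request an MCP client would write to the server's stdin, one JSON object per line (the `initialize` handshake that must precede it is omitted for brevity):

```typescript
// The JSON-RPC 2.0 message invoking the track_shipment tool over stdio.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "track_shipment",
    arguments: { reference: "1806203236" },
  },
};

// MCP's stdio transport is newline-delimited JSON, so one line = one message.
console.log(JSON.stringify(request));
```

Piping that line into the running server process should produce a matching JSON-RPC response on stdout.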
## Rate Limiting & Reliability

The DB Schenker public API enforces rate limits to ensure fair usage and system stability. This implementation includes several mechanisms to handle rate limiting gracefully:

- **Automatic retries**: Requests that fail with HTTP 429 (rate limited) are retried automatically, up to 3 times with increasing delays.
- **Exponential backoff**: Each retry waits progressively longer before attempting again, reducing the likelihood of hitting rate limits on subsequent attempts.
- **Response caching**: Successful API responses are cached in memory for 60 seconds, significantly reducing the number of API calls for repeated queries within the cache window.
- **Graceful error handling**: When rate limits are encountered, the tool returns clear error messages with helpful hints, so users understand the situation and can retry when appropriate.
All HTTP 429 responses are handled transparently, and users will receive informative error messages if rate limits persist after all retry attempts are exhausted.
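The retry-plus-cache behavior described above can be sketched as follows; the function name, constants, and injectable `fetchFn` parameter are illustrative, not the project's actual identifiers:

```typescript
// Sketch: retry on HTTP 429 with exponential backoff, plus a 60-second
// in-memory cache of successful responses.
const CACHE_TTL_MS = 60_000; // cache successful responses for 60 seconds
const MAX_RETRIES = 3;       // up to 3 retries after the initial attempt

const cache = new Map<string, { expires: number; body: string }>();

async function fetchWithRetry(
  url: string,
  // fetchFn is injectable for testing; defaults to the global fetch.
  fetchFn: (url: string) => Promise<{ ok: boolean; status: number; text(): Promise<string> }> = fetch,
): Promise<string> {
  const hit = cache.get(url);
  if (hit && hit.expires > Date.now()) return hit.body; // serve from cache

  for (let attempt = 0; ; attempt++) {
    const res = await fetchFn(url);
    if (res.status === 429 && attempt < MAX_RETRIES) {
      // Exponential backoff: wait 1s, 2s, 4s before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
      continue;
    }
    if (res.status === 429) throw new Error("Rate limited: all retries exhausted");
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const body = await res.text();
    cache.set(url, { expires: Date.now() + CACHE_TTL_MS, body });
    return body;
  }
}
```

Injecting the fetch function keeps the backoff and cache logic unit-testable without hitting the real DB Schenker API.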