GlobKurier Shipping MCP
Server Details
Track shipments and search shipping products (DPD, InPost, DHL, FedEx, UPS, GLS) via GlobKurier API.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: GlobKurier-pl/mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
5 tools

get_countries
Fetch the complete list of countries supported by GlobKurier. Each country entry contains: 'id' (numeric country ID required by other tools), 'name' (country name), 'iso_code' (ISO 3166-1 alpha-2 or regional code), EU membership status, road transport availability, and postal code formats. ALWAYS call this tool first to resolve country IDs before calling search_products, get_product_addons, or get_search_url. Find the country by name or iso_code, then use its 'id' field value.
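The resolution workflow the description mandates — look up a country by name or ISO code, then use its 'id' — can be sketched as a small helper. This is a hypothetical sketch: the field names ('id', 'name', 'iso_code') come from the description above, and the sample data is illustrative, not real API output.

```python
def resolve_country_id(countries, query):
    """Return the numeric 'id' of the country matching a name or ISO code."""
    q = query.strip().lower()
    for country in countries:
        if q in (country["name"].lower(), country["iso_code"].lower()):
            return country["id"]
    raise LookupError(f"No country matching {query!r}")

# Fabricated sample shaped like the entries the description promises.
sample = [
    {"id": 1, "name": "Poland", "iso_code": "PL"},
    {"id": 60, "name": "Germany", "iso_code": "DE"},
]

print(resolve_country_id(sample, "PL"))       # 1
print(resolve_country_id(sample, "Germany"))  # 60
```

The resolved ID is what search_products, get_product_addons, and get_search_url expect in their country-ID parameters.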
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and effectively discloses what data is returned (country ID, name, ISO code, EU status, transport availability, postal formats). Since an output schema exists, full enumeration isn't required, though it could mention rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences total with zero waste: sentence 1 defines the action, sentence 2 details the returned data structure, and sentence 3 provides usage sequencing. Information is front-loaded and immediately actionable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description appropriately summarizes rather than duplicates return values. It establishes the critical prerequisite relationship with three sibling tools and explains the resolution workflow, making it complete for a dependency-resolution tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters and the schema coverage is 100% (empty object), which per guidelines sets a baseline of 4. No parameter documentation is required or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Fetch the complete list of countries supported by GlobKurier'—a specific verb+resource combination that clearly defines the scope. It also distinguishes from siblings by identifying this as the ID resolution source for other tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'ALWAYS call this tool first to resolve country IDs before calling search_products, get_product_addons, or get_search_url,' providing clear prerequisites and naming specific alternatives. It also includes the workflow instruction to find by name/iso_code then use the 'id' field.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_product_addons
Get available addons for a specific shipping product. Returns addons like insurance, cash on delivery (COD), delivery to company/private person, non-standard items, and other supplements. Each addon includes pricing, category, requirements, and validation rules. Use this after selecting a product from search_products to see available options. IMPORTANT: Do NOT guess or assume country IDs. Always call get_countries first, find the matching country by name or ISO code, and use the value from its 'id' field as the country ID.
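A hypothetical request payload, built from the parameter names in the schema table below. Since the tool does not document units for dimensions or weight, every numeric value here is a placeholder, not a recommendation.

```python
# Hypothetical get_product_addons arguments; values are placeholders only.
payload = {
    "width": 30, "height": 20, "length": 40,  # units undocumented by the tool
    "weight": 2.0,
    "quantity": 1,
    "product_id": 123,            # chosen from a prior search_products result
    "sender_country_id": 1,       # resolved via get_countries, never guessed
    "sender_post_code": "00-001",
    "receiver_country_id": 1,
    "receiver_post_code": "30-001",
    "insurance_value": 500,       # optional addon input
    "insurance_currency": "PLN",  # schema default
}
```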
| Name | Required | Description | Default |
|---|---|---|---|
| width | Yes | | |
| height | Yes | | |
| length | Yes | | |
| weight | Yes | | |
| quantity | Yes | | |
| product_id | Yes | | |
| insurance_value | No | | |
| sender_post_code | Yes | | |
| sender_country_id | Yes | | |
| insurance_currency | No | | PLN |
| receiver_post_code | Yes | | |
| receiver_country_id | Yes | | |
| cash_on_delivery_value | No | | |
| cash_on_delivery_currency | No | | PLN |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return content ('pricing, category, requirements, and validation rules') and gives concrete examples of addon types. Minor gap: doesn't explicitly state this is read-only/safe, though implied by 'Get'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear progression: purpose → return value details → workflow context → critical warning. Every sentence earns its place; 'IMPORTANT' warning appropriately emphasized for common error case without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 14 parameters (10 required) and output schema existence, description appropriately focuses on workflow prerequisites rather than return values. Captures the critical integration point (country ID sourcing) that would otherwise block usage. Minor gap: doesn't specify units for dimensions/weight.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, description must compensate. It provides critical semantics for country ID parameters ('Do NOT guess... use value from get_countries id field'), but leaves 12 other parameters (dimensions, weight, post codes) completely undocumented. Partial compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resource 'addons for a specific shipping product'. Explicitly distinguishes from sibling 'search_products' via workflow instruction 'Use this after selecting a product from search_products', establishing correct tool sequence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use ('after selecting a product from search_products') and critical prerequisites ('Do NOT guess or assume country IDs. Always call get_countries first'). Clear workflow chain: get_countries → search_products → this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_search_url
Generate a direct link to the GlobKurier search page with pre-filled shipment parameters. Use this after presenting a product offer to give the user a URL to complete the purchase. Optionally pass a productId to highlight a specific offer on the search page.
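The full sequence — get_countries, then search_products, then get_search_url — can be sketched end to end. Here `call_tool` stands in for a generic MCP client invocation, and the stubbed responses are fabricated for demonstration; only the tool names and the country-ID workflow come from the descriptions on this page.

```python
def build_offer_link(call_tool, sender_iso, receiver_iso, parcel):
    """Resolve country IDs, search products, and return a purchase link."""
    ids = {c["iso_code"]: c["id"] for c in call_tool("get_countries", {})}
    args = dict(parcel,
                sender_country_id=ids[sender_iso],
                receiver_country_id=ids[receiver_iso])
    offer = call_tool("search_products", args)[0]  # first offer, for brevity
    return call_tool("get_search_url", dict(args, product_id=offer["id"]))

def fake_call_tool(name, args):
    # Canned responses standing in for a live server.
    if name == "get_countries":
        return [{"id": 1, "iso_code": "PL"}]
    if name == "search_products":
        return [{"id": 42, "carrier": "DPD"}]
    if name == "get_search_url":
        return f"https://example.invalid/search?product_id={args['product_id']}"

parcel = {"width": 30, "height": 20, "length": 40, "weight": 2, "quantity": 1}
print(build_offer_link(fake_call_tool, "PL", "PL", parcel))
```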
| Name | Required | Description | Default |
|---|---|---|---|
| width | Yes | | |
| height | Yes | | |
| length | Yes | | |
| weight | Yes | | |
| quantity | Yes | | |
| product_id | No | | |
| sender_country_id | Yes | | |
| receiver_country_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully clarifies that the tool generates a URL for external purchase completion (not an internal redirect) and mentions the highlighting behavior for specific offers. However, it omits details about URL expiration, idempotency, or whether the link is single-use.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero redundancy: purpose statement, workflow guidance, and optional parameter note. Information is front-loaded with the core action, making it immediately scannable for an agent determining tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description appropriately omits return-value details. However, for a complex 8-parameter tool with zero schema coverage, it fails to document critical implementation details such as measurement units or the relationship between country IDs and the 'get_countries' sibling tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (no parameter descriptions), yet the description only explicitly mentions 'productId' and vaguely references 'shipment parameters' without mapping to the specific dimension fields (width, height, length, weight) or explaining critical semantics like units (cm? kg?) or valid ranges. This leaves 7 of 8 parameters effectively undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Generate') with clear resource ('direct link to the GlobKurier search page') and scope ('pre-filled shipment parameters'). It effectively distinguishes from sibling 'search_products' by emphasizing this creates a purchase completion URL rather than returning product data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance ('Use this after presenting a product offer') that establishes when to invoke the tool in the user journey. However, it lacks explicit 'when not to use' guidance or named alternatives for cases where the user wants to search rather than purchase.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_shipment_status
Retrieve detailed shipment tracking status from GlobKurier API. Returns current status, complete tracking history with timestamps, sender/receiver addresses, and delivery confirmation. Supports both Polish and English language responses.
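A hypothetical argument set for this tool. The order number is a placeholder (its real format is undocumented), and the assumption that 'language' accepts 'pl'/'en' codes is inferred from the bilingual note above rather than stated by the schema.

```python
# Hypothetical get_shipment_status arguments; values are illustrative only.
args = {
    "order_number": "GK123456789",  # placeholder; real format undocumented
    "language": "en",               # optional; assumed 'pl'/'en' codes
}
```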
| Name | Required | Description | Default |
|---|---|---|---|
| language | No | | |
| order_number | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the external API dependency (GlobKurier) and bilingual response capability (Polish/English), but omits safety classification (read-only), rate limits, or authentication requirements that agents need for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences: purpose with source, output details, and behavioral trait. Front-loaded with the core action, zero redundant words, and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema and existence of an output schema, the description provides sufficient context. It adequately covers the tool's functionality, though explicit mention of required authentication or error scenarios would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It explains the language parameter semantics implicitly via 'Supports both Polish and English language responses', but provides no guidance on order_number format, where to find it, or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Retrieve' with clear resource 'shipment tracking status' and identifies the source system 'GlobKurier API'. It clearly distinguishes from siblings (get_countries, search_products, etc.) which handle entirely different domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it doesn't explicitly name alternatives (none exist among siblings), the context is unambiguous: use this when you need shipment tracking information. It implies the primary use case through the detailed return value description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_products
Search for available shipping products based on package dimensions, weight, quantity, and country locations. Returns products from multiple carriers (DPD, InPost, DHL, FedEx, UPS, GLS, etc.) grouped by delivery time (fast, superfast, noon, morning, standard). Includes pricing, delivery times, available addons, and carrier details. IMPORTANT: Do NOT guess or assume country IDs. Always call get_countries first, find the matching country by name or ISO code, and use the value from its 'id' field as the country ID.
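Since results arrive grouped by delivery time, an agent typically flattens the groups before comparing offers. This sketch assumes group keys like 'fast' and 'standard' and a 'price' field per product, which are inferred from the description — the real response shape may differ.

```python
def cheapest(grouped):
    """Flatten delivery-time groups and return the lowest-priced offer."""
    offers = [p for group in grouped.values() for p in group]
    return min(offers, key=lambda p: p["price"])

# Fabricated sample response grouped by delivery time.
sample = {
    "fast": [{"carrier": "DPD", "price": 25.0}],
    "standard": [{"carrier": "InPost", "price": 14.5},
                 {"carrier": "GLS", "price": 16.0}],
}

print(cheapest(sample))  # {'carrier': 'InPost', 'price': 14.5}
```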
| Name | Required | Description | Default |
|---|---|---|---|
| width | Yes | | |
| height | Yes | | |
| length | Yes | | |
| weight | Yes | | |
| quantity | Yes | | |
| sender_country_id | Yes | | |
| receiver_country_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description discloses important behavioral traits including that results are 'grouped by delivery time (fast, superfast, noon, morning, standard)' and lists specific supported carriers, explaining the organization of returned data. However, it omits whether the operation is read-only (implied by 'search' but not stated) or other constraints like rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three distinct functional components: the search capability, the return format details, and the critical workflow constraint. The 'IMPORTANT' flag appropriately front-loads urgency for the dependency warning without wasting words, and every sentence adds value beyond the structured schema fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description adequately summarizes return values (pricing, delivery times, addons, carrier details) and compensates for the absence of parameter schema descriptions by embedding the get_countries dependency. However, it does not specify units for dimensions and weight, a minor operational gap given the complete lack of schema descriptions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage (all seven parameters lack descriptions), and while the description mentions 'package dimensions, weight, quantity, and country locations' providing high-level conceptual mapping, it fails to specify critical semantics like measurement units (cm? kg?) or integer formats. The 'IMPORTANT' note about country IDs partially compensates by explaining the semantic source of those specific parameters, but does not fully address the unit gap for physical measurements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Search' with the resource 'shipping products' and clearly distinguishes from siblings by detailing the multi-carrier scope (DPD, DHL, FedEx, etc.) and delivery-time grouping logic. It explicitly identifies the six input categories (dimensions, weight, quantity, country locations) that map to the seven required parameters, clearly differentiating it from tools like get_shipment_status or get_countries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit workflow guidance stating 'Always call get_countries first' and warns 'Do NOT guess or assume country IDs,' directly naming the sibling tool and establishing a strict prerequisite. This creates an unambiguous decision tree for the agent regarding when to use this tool versus its dependency.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.