Multi-Carrier Shipping API — powered by Secureship
Server Details
Secureship MCP gives AI assistants access to a multi-carrier shipping API covering rate comparison, label generation, package tracking, pickup scheduling, address book management, shipment history, customs documents, and more — across carriers like UPS, FedEx, Purolator, Canpar, and others. Browse 150+ live endpoint schemas, parameters, and auth details — always current, never stale.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
4 tools

GetAuthInfo: Get Authentication Info (Read-only)
Get Secureship API authentication instructions. Call this FIRST before generating any code examples with authentication headers. Secureship uses X-API-KEY header authentication, NOT Bearer tokens.
No parameters.
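As a hedged illustration of the auth constraint this tool exists to surface, the sketch below builds a request that sends the key in an X-API-KEY header rather than an Authorization: Bearer header. The base URL is a placeholder assumption; only the header scheme comes from the description above.

```python
# Sketch: Secureship authenticates via an X-API-KEY header, NOT a
# Bearer token. The host below is illustrative, not a real endpoint.
import urllib.request

API_KEY = "your-api-key"  # placeholder value

def build_request(path: str) -> urllib.request.Request:
    """Attach the X-API-KEY header Secureship expects."""
    req = urllib.request.Request(f"https://api.secureship.example{path}")
    req.add_header("X-API-KEY", API_KEY)  # correct scheme
    # NOT: req.add_header("Authorization", f"Bearer {API_KEY}")
    return req

req = build_request("/v1/carriers/rates")
```

Note that `urllib` normalizes stored header names, so the header is retrievable as `X-api-key` on the request object even though it is sent as written.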
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations indicate readOnly/destructive hints, the description adds valuable behavioral context: it specifies the exact authentication mechanism (X-API-KEY header) and establishes a critical workflow constraint (must be called first before code generation). This goes beyond the safety profile provided by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences with zero waste. Front-loaded with the core action ('Get Secureship API authentication instructions'), followed by critical sequencing ('Call this FIRST'), then specific technical constraint ('X-API-KEY...NOT Bearer').
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's narrow scope (no parameters, read-only annotations, no output schema), the description adequately covers the essential domain: what it returns (auth instructions), when to use it (before code generation), and the critical auth mechanism detail. Minor gap: doesn't specify the return format (e.g., text vs schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Per calibration standards, 0 parameters warrants a baseline score of 4. The description correctly implies no inputs are needed to retrieve auth instructions, consistent with the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'Secureship API authentication instructions' using the specific verb 'Get' and resource 'authentication instructions'. It effectively distinguishes itself from sibling tools (GetEndpointDetail, ListEndpoints, SearchDocs) by focusing specifically on authentication setup rather than API functionality or documentation search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Call this FIRST before generating any code examples', providing clear sequencing guidance. The phrase 'NOT Bearer tokens' establishes when not to use standard assumptions, effectively guiding correct usage against common alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GetEndpointDetail: Get Endpoint Schema (Read-only)
Get the full schema for a specific Secureship API endpoint — all parameters, request body fields, response format, and authentication requirements. Use after SearchDocs identifies the right endpoint. Authentication uses the X-API-KEY header.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Endpoint path, e.g. /v1/shipment/in-progress or /v1/carriers/rates | |
| method | Yes | HTTP method: GET, POST, PUT, PATCH, or DELETE | |
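A hedged sketch of the arguments an agent might assemble for this tool, based on the parameter table above. The helper name and validation rules are illustrative; the actual MCP client call is not shown.

```python
# Sketch: build a GetEndpointDetail argument payload from the two
# documented parameters. Validation mirrors the table, nothing more.
import json

ALLOWED_METHODS = {"GET", "POST", "PUT", "PATCH", "DELETE"}

def endpoint_detail_args(path: str, method: str) -> dict:
    """Shape the {path, method} arguments for GetEndpointDetail."""
    method = method.upper()
    if method not in ALLOWED_METHODS:
        raise ValueError(f"method must be one of {sorted(ALLOWED_METHODS)}")
    if not path.startswith("/"):
        raise ValueError("path should look like /v1/carriers/rates")
    return {"path": path, "method": method}

payload = json.dumps(endpoint_detail_args("/v1/carriers/rates", "post"))
```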
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds authentication mechanism ('X-API-KEY header') and clarifies return content scope ('all parameters, request body fields, response format'). It does not disclose rate limits, caching behavior, or error cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences: first defines purpose and output scope, second establishes workflow dependency on SearchDocs, third states authentication method. No redundancy or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter read-only tool with complete schema annotations and no output schema, the description adequately covers the tool's purpose, workflow position, and authentication requirements. Slight gap in not mentioning error handling for invalid endpoints, but otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with 'path' and 'method' fully documented. The description does not add additional parameter semantics (format constraints, valid path patterns, case sensitivity) beyond what the schema provides, warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states specific verb ('Get') and resource ('full schema for a specific Secureship API endpoint') and enumerates returned components (parameters, request body, response format, authentication). It distinguishes from sibling SearchDocs by implying this requires a specific endpoint already identified.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit workflow guidance: 'Use after SearchDocs identifies the right endpoint.' This clearly sequences the tool relative to a named sibling (SearchDocs), establishing when to invoke this tool in the workflow chain.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ListEndpoints: List API Endpoints (Read-only)
List all Secureship API endpoints, optionally filtered by category. Categories include: carriers, shipment, history, address-book, auth, account, users, invoices, pickups, documents, integrations, settings, dashboard. Use for discovery when you don't know which endpoint to search for.
| Name | Required | Description | Default |
|---|---|---|---|
| category | Yes | Optional category filter, e.g. 'carriers' or 'address-book'. Omit to list all endpoints. | |
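A hedged sketch of how an agent could validate the category filter against the values the description enumerates. The category set is copied from the text above; treat it as illustrative rather than a frozen enum.

```python
# Sketch: validate the optional ListEndpoints category filter.
CATEGORIES = {
    "carriers", "shipment", "history", "address-book", "auth",
    "account", "users", "invoices", "pickups", "documents",
    "integrations", "settings", "dashboard",
}

def list_endpoints_args(category=None):
    """Omit the category to list everything; otherwise filter by it."""
    if category is None:
        return {}
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    return {"category": category}
```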
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, covering the safety profile. The description adds valuable behavioral context by enumerating all valid category values (carriers, shipment, etc.) and specifying this is for the Secureship API domain, though it omits details about return format or pagination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with purpose front-loaded. The category list is long but earns its place by documenting valid filter values not present in schema enums. No redundant verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a single-parameter discovery tool. While output schema is absent, the description adequately covers the input domain (category values) and use case (discovery). Annotations handle safety disclosures, allowing the description to focus on functional semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With schema coverage at 100%, the baseline is 3. The description adds substantial value by comprehensively listing all valid category values (carriers through dashboard) which the schema only hints at with 'e.g.' examples. It also clarifies the optional nature of filtering despite the schema marking the parameter required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'List' and resource 'Secureship API endpoints' with specific scope (all endpoints, optional category filtering). It distinguishes from sibling GetEndpointDetail by positioning this as a discovery tool for when you don't know specific endpoints, implying the sibling is for known endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Use for discovery when you don't know which endpoint to search for' provides clear context on when to invoke this tool versus alternatives like GetEndpointDetail. It lacks explicit 'when not to use' language or direct sibling comparisons, but the discovery framing effectively guides selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
SearchDocs: Search API Documentation (Read-only)
Search Secureship API documentation. Use when you need to find endpoints for a specific task (e.g. 'create a shipment', 'get rates', 'address book'). Returns matching endpoints with method, path, summary, and tags. Follow up with GetEndpointDetail to get full parameter schemas. IMPORTANT: Secureship API uses the X-API-KEY header for authentication (NOT Bearer token). Pass your API key as: X-API-KEY: your-api-key
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Natural language or keyword query, e.g. 'create label', 'track package', 'address book' | |
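The description recommends a two-step workflow: SearchDocs to find candidate endpoints, then GetEndpointDetail for the full schema. The sketch below mocks the first step's result shape using only the fields the description says SearchDocs returns (method, path, summary, tags); the selection helper is a hypothetical name, not part of the server.

```python
# Sketch: pick a SearchDocs hit to feed into GetEndpointDetail.
# `sample` mimics the documented result fields; no real call is made.
def pick_endpoint(results: list, keyword: str) -> dict:
    """Choose the first search hit whose summary mentions the keyword."""
    for hit in results:
        if keyword.lower() in hit["summary"].lower():
            return {"path": hit["path"], "method": hit["method"]}
    raise LookupError(f"no endpoint matching {keyword!r}")

sample = [
    {"method": "POST", "path": "/v1/carriers/rates",
     "summary": "Compare rates across carriers", "tags": ["carriers"]},
    {"method": "POST", "path": "/v1/shipment",
     "summary": "Create a shipment and label", "tags": ["shipment"]},
]
chosen = pick_endpoint(sample, "rates")
```

The returned `{path, method}` pair is exactly the argument shape GetEndpointDetail's parameter table requires.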
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only/safe operation. Description adds critical authentication details ('X-API-KEY header...NOT Bearer token') and discloses return structure ('method, path, summary, and tags'), compensating for the missing output schema. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences logically ordered: purpose, usage guidance, output description, and auth requirements. Every sentence provides unique value. Auth note is critical and appropriately placed at the end. No redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter search tool without output schema, description fully compensates by detailing the return fields, explaining the two-step workflow (search then get detail), and specifying authentication requirements. Complete for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with the query parameter well-described ('Natural language or keyword query'). Description reinforces usage through examples in the usage section but does not add semantic meaning beyond what the schema already provides. Baseline 3 is appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it searches Secureship API documentation and returns matching endpoints. It explicitly distinguishes from sibling GetEndpointDetail by stating to 'Follow up with GetEndpointDetail to get full parameter schemas,' clarifying this tool provides overview/search while the sibling provides detail.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('Use when you need to find endpoints for a specific task') with concrete examples (e.g., 'create a shipment', 'get rates'). Explicitly names the alternative/next-step tool (GetEndpointDetail) for when full schemas are needed, creating a clear decision path between tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
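Before publishing, a quick local sanity check of the file can catch a malformed payload. This is a hedged sketch based only on the fields in the example above; Glama may enforce additional rules server-side.

```python
# Sketch: minimal local validation of a glama.json payload before it
# is served at /.well-known/glama.json. Checks only the documented
# fields (maintainers with email entries), nothing Glama-specific.
import json

def check_glama_json(raw: str) -> bool:
    """Return True if every maintainer entry carries an email."""
    doc = json.loads(raw)
    maintainers = doc.get("maintainers", [])
    return bool(maintainers) and all("email" in m for m in maintainers)

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
```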
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!