Packrift Packaging
Server Details
Live Packrift catalog: search, pricing, inventory, packaging recommendations, checkout URLs.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across the 6 of 7 tools that were scored.
Each tool has a uniquely defined purpose: inventory checking, dimension-based packaging search, product search, pricing, product details, shipping estimates, and cart creation. There is no overlap or ambiguity.
All tool names follow a consistent verb_noun pattern using snake_case, such as check_inventory, create_cart_url, find_packaging_for_item, and get_pricing. The naming is predictable and uniform.
With 7 tools, the server is well-scoped for its packaging domain. Each tool covers a necessary step in the workflow, from product search through checkout, without being excessive or insufficient.
The tool set covers the full lifecycle: searching products, finding correct packaging by dimensions, checking inventory and pricing, estimating shipping, and creating a cart URL. No major gaps are evident for agent tasks.
Available Tools
7 tools

check_inventory (Read-only)
Use to confirm a SKU is in stock before recommending it or building a cart. Input: variant_ids (numeric). Returns available count and in_stock boolean per variant. Live, never cached.
| Name | Required | Description | Default |
|---|---|---|---|
| variant_ids | Yes | Numeric variant IDs to check, one or more | |
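As a concrete illustration, here is a minimal sketch of an MCP `tools/call` request for check_inventory, paired with a response shaped to match the description (available count and in_stock boolean per variant). The variant IDs and response field names are assumptions, since the tool publishes no output schema.

```python
import json

# Hypothetical JSON-RPC 2.0 envelope for an MCP tools/call request.
# The variant IDs below are illustrative, not real Packrift SKUs.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_inventory",
        "arguments": {"variant_ids": [44120001, 44120002]},
    },
}

# A response shaped to match the description: available count and an
# in_stock boolean per variant (field names are assumptions).
response = {
    "44120001": {"available": 37, "in_stock": True},
    "44120002": {"available": 0, "in_stock": False},
}

print(json.dumps(request["params"], sort_keys=True))
```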
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true (safe read) and openWorldHint=true (external changes). The description adds value by emphasizing 'Real-time' and 'Live, never cached,' informing the agent of freshness behavior and no caching side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no fluff. The first sentence states the core purpose, and the second adds key behavioral context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and annotations covering safety, the description covers the purpose and data freshness. However, it does not describe the output format (e.g., a map of ID to count), which would improve completeness. Still, the missing detail is minor.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must compensate. It clarifies that 'variant_ids' accepts 'one or more' values, matching the minItems constraint, but adds no format or example. This is adequate but minimal beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'Real-time available inventory count for one or more variant ids,' specifying the verb (returns count) and resource (inventory for variant IDs). It distinguishes from siblings like get_pricing or search_products, which handle different concerns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clarifies that the tool provides live, uncached data, which implies use cases requiring freshness. However, it does not explicitly state when not to use it or mention alternatives; the context suffices given no sibling tool overlaps.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_cart_url (Read-only)
Final step: hand the user off to checkout. Inputs: items[{variant_id, qty}], optional discount_code. Returns a packrift.com/cart/... permalink with ?ref=mcp attribution and optional &discount=.
| Name | Required | Description | Default |
|---|---|---|---|
| ref | No | Attribution tag appended as ?ref= | mcp |
| items | Yes | Array of {variant_id, qty} objects | |
| discount_code | No | Discount code appended as &discount= | |
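The permalink behavior described above can be sketched as follows. The exact path format is an assumption modeled on Shopify cart permalinks (variant:qty pairs); only the `?ref=mcp` attribution and optional `&discount=` behavior come from the description.

```python
from urllib.parse import urlencode

def build_cart_url(items, discount_code=None, ref="mcp"):
    """Sketch of the permalink described above: items become variant:qty
    pairs, ref is always appended, discount is optional. The path layout
    is an assumption, not confirmed server behavior."""
    path = ",".join(f"{i['variant_id']}:{i['qty']}" for i in items)
    params = {"ref": ref}
    if discount_code:
        params["discount"] = discount_code
    return f"https://packrift.com/cart/{path}?{urlencode(params)}"

url = build_cart_url([{"variant_id": 44120001, "qty": 2}], discount_code="SAVE10")
print(url)  # https://packrift.com/cart/44120001:2?ref=mcp&discount=SAVE10
```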
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, indicating no mutation. The description adds behavioral specifics: always appends '?ref=mcp' and optionally a discount code. This goes beyond annotations without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. The first sentence defines the core purpose, the second adds crucial behavioral details. Efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool lacks an output schema, and the description does not specify the return format. For a URL-builder, stating that it returns a string URL would complete the picture. Without this, the agent must infer the output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must provide parameter meaning. It does so by referencing 'variants and quantities' (items), '?ref=mcp' (ref), and 'discount_code'. All three parameters are implicitly covered, adding essential context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'builds a Shopify cart permalink for given variants and quantities', using a specific verb and resource. This distinguishes it from sibling tools (e.g., check_inventory, get_pricing) which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for creating cart URLs but provides no explicit when-to-use or when-not-to-use guidance. No alternatives are mentioned, though the sibling tools are clearly different in function.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_packaging_for_item (Read-only)
Use when the user has an item's L/W/D and needs the right box or mailer (also: box-vs-mailer, Uline-by-size). Inputs: L/W/D in, weight lb, use_case (mailer|box|fragile|apparel|ecommerce). Returns 5 SKUs ranked by fit with price, stock, URL.
| Name | Required | Description | Default |
|---|---|---|---|
| use_case | Yes | One of mailer, box, fragile, apparel, ecommerce | |
| item_depth_in | Yes | Item depth in inches | |
| item_width_in | Yes | Item width in inches | |
| item_length_in | Yes | Item length in inches | |
| item_weight_lb | Yes | Item weight in pounds | |
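To make the "ranked by fit" behavior concrete, here is an illustrative sketch of dimension-based fit ranking; it is not the server's actual algorithm. A candidate fits if every interior dimension meets or exceeds the item's, and tighter fits (less wasted volume) rank first.

```python
# Illustrative fit ranking, not the server's actual algorithm.
def rank_by_fit(item_l, item_w, item_d, candidates, top_n=5):
    item_vol = item_l * item_w * item_d
    # Keep only candidates large enough in every dimension.
    fits = [
        c for c in candidates
        if c["l"] >= item_l and c["w"] >= item_w and c["d"] >= item_d
    ]
    # Rank by wasted volume, tightest fit first, return up to top_n.
    return sorted(fits, key=lambda c: c["l"] * c["w"] * c["d"] - item_vol)[:top_n]

boxes = [
    {"sku": "BOX-12", "l": 12, "w": 9, "d": 4},
    {"sku": "BOX-10", "l": 10, "w": 8, "d": 3},
    {"sku": "BOX-06", "l": 6, "w": 4, "d": 2},  # too small for the item below
]
best = rank_by_fit(9, 7, 2.5, boxes)
print([b["sku"] for b in best])  # ['BOX-10', 'BOX-12']
```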
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description aligns with readOnlyHint and openWorldHint annotations. It adds behavioral details: returns 5 SKUs ranked by fit with price, stock, URL. No contradictions, but could elaborate on openWorld implications (e.g., results may vary).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is three sentences, front-loaded with purpose and inputs, followed by outputs. Every sentence adds essential information with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description adequately explains returns (5 SKUs, ranked, fields). Inputs and usage are clear. Lacks examples of output format or edge cases, but sufficient for a moderately complex tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, but description lists all inputs ('L/W/D in, weight lb, use_case') and enumerates use_case options in parentheses. This adds context beyond the schema's field names, helping agents understand the measurement units and purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool's purpose: 'find the right box or mailer' given item dimensions and weight. It clearly distinguishes itself from sibling tools like check_inventory or get_pricing by focusing on packaging recommendations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description provides clear when-to-use guidance: 'Use when the user has an item's L/W/D and needs the right box or mailer.' It includes example use cases (box-vs-mailer, Uline-by-size), but does not explicitly exclude scenarios or mention alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pricing (Read-only)
Use to confirm live unit price and line total for variants about to go in a cart. Inputs: variant_ids (numeric), quantity. Returns unit_price, currency, available_quantity, line_total. Never cached.
| Name | Required | Description | Default |
|---|---|---|---|
| quantity | No | Quantity used to compute line_total | 1 |
| variant_ids | Yes | Numeric variant IDs to price, one or more | |
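The relationship between the returned fields can be sketched as below: line_total is unit_price multiplied by the requested quantity. The response shape and field nesting are assumptions, since the tool publishes no output schema.

```python
# Sketch of how the documented fields relate; the per-variant map
# shape is an assumption, not a published output schema.
def price_lines(pricing, quantity=1):
    return {
        vid: {
            "unit_price": p["unit_price"],
            "currency": p["currency"],
            "available_quantity": p["available_quantity"],
            # line_total derives from unit_price and quantity.
            "line_total": round(p["unit_price"] * quantity, 2),
        }
        for vid, p in pricing.items()
    }

live = {"44120001": {"unit_price": 12.50, "currency": "USD", "available_quantity": 37}}
print(price_lines(live, quantity=3)["44120001"]["line_total"])  # 37.5
```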
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds the valuable behavioral context 'Live, never cached', which is beyond the annotations. No contradictions found, though it could mention result format or error handling for missing variant IDs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, yet it front-loads the key purpose and the important 'live, never cached' property. Every word is necessary, no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with two straightforward parameters and no output schema, the description covers the core purpose and freshness. However, it could be slightly more complete by noting what happens with invalid variant_ids or the format of the returned price/quantity (e.g., 'Returns a map of variant ID to price and quantity').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'variant_ids' but does not explain the 'quantity' parameter (default 1, integer) or its role. The description adds only partial meaning, leaving the user to infer parameter purpose from the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'Real-time price and available quantity for one or more variant ids', which distinguishes it from siblings like 'get_product' (likely full product details) and 'check_inventory' (possibly stock levels). It uses specific verbs and resources, and the 'Live, never cached' adds clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for real-time pricing needs ('Live, never cached') but does not explicitly state when to use this tool over alternatives, nor does it provide exclusion criteria or mention sibling tools like 'get_product' or 'check_inventory'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_product (Read-only)
Use after find_packaging_for_item or search_products to pull full detail for a handle: all variants, SKUs, dimensions, weight, stock. Input: handle. Call before building a cart to map qty to the right variant.
| Name | Required | Description | Default |
|---|---|---|---|
| handle | Yes | Product handle to look up | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true, but the description adds value by specifying the exact data fields returned (variants, dimensions, weight, inventory). This provides context beyond the annotations without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is concise, front-loaded with the action and resource, and includes essential product details. Every word contributes to clarity with no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description provides a reasonable overview of the output content. However, it lacks information on the result structure, error handling, or any pagination, leaving some ambiguity for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage for the 'handle' parameter. The description only mentions 'by handle' without explaining what a handle is, its format, or examples. More detail is needed to help the agent provide a valid input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full product detail by handle, listing specific fields like variants, dimensions, weight, and inventory. It distinguishes from sibling tools such as search_products (search) and get_pricing (pricing) by focusing on a single product's comprehensive details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description does not mention prerequisites (e.g., needing a handle from search_products) or scenarios where other tools (like check_inventory) might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_shipping_estimate (Read-only)
Use when the user asks shipping cost to a ZIP for a chosen cart. Inputs: destination_postal_code, country (US|CA), items[{variant_id, qty}]. Returns carrier rate options with price and currency.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | Array of {variant_id, qty} objects | |
| country | Yes | Destination country, US or CA | |
| destination_postal_code | Yes | Destination ZIP or postal code | |
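Before calling the tool, an agent might validate arguments against the constraints stated above (country limited to US or CA, at least one item). This sketch uses the field names from the parameter table; the validation itself is an assumption, not server behavior.

```python
# Client-side argument check before a get_shipping_estimate call.
# The US|CA constraint comes from the description; the field names
# come from the parameter table. This validation is illustrative.
def shipping_estimate_args(postal_code, country, items):
    if country not in ("US", "CA"):
        raise ValueError("country must be 'US' or 'CA'")
    if not items:
        raise ValueError("at least one item is required")
    return {
        "destination_postal_code": postal_code,
        "country": country,
        "items": [{"variant_id": i["variant_id"], "qty": i["qty"]} for i in items],
    }

args = shipping_estimate_args("60606", "US", [{"variant_id": 44120001, "qty": 2}])
print(args["country"])  # US
```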
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint, so the description's statement about returning rates adds little beyond that. It mentions the Shopify API, which hints at authentication needs, but does not elaborate on rate limits or other behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the core action, and includes a relevant implementation detail. Every word is valuable, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters and no output schema, the description covers the main inputs and outputs. Missing details include the enum constraint (only US/CA) and the minimum item count, but overall it is nearly complete for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description partially compensates by mentioning 'destination postal code', 'country', and 'cart of variants and quantities'. However, it does not specify parameter types, defaults, or constraints like minItems, leaving some interpretation to the agent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns shipping rate options given a destination postal code, country, and items. It distinguishes itself from sibling tools like get_pricing or check_inventory by focusing on shipping estimates, and mentions the underlying Shopify API, leaving no ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (to get shipping rates for a cart), but it does not explicitly state when not to use it or provide alternative tools. The context is clear, but exclusions or comparisons are absent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_products (Read-only)
Use when the user names a category by keyword (e.g. 'kraft tape', 'bubble mailer', 'starter kit') with no dimensions. For dimension-based fit, prefer find_packaging_for_item. Returns products with price, stock, URL.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of products to return | |
| query | Yes | Free-text search; matches title, vendor, type, tags | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds that results are limited by the 'limit' parameter, which addresses the open-world behavior. It does not mention rate limits, authentication, or what happens with no results, but the annotations cover safety. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that concisely states the action, the resource, and the output. Every element is essential. No redundant phrases. Well front-loaded with the verb and noun.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with only two parameters and no output schema, the description covers the basic functionality and return fields. It lacks details on pagination, sorting, or ordering, but the openWorldHint and limit parameter mitigate that. Adequate but could mention sorting or ordering behavior for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50%: 'query' is described in schema, 'limit' is not. The description adds context for 'limit' by stating 'Returns up to `limit` products', which clarifies its role. For 'query', the schema already specifies it matches title, vendor, type, tags, so the description does not add much. Overall, the description enhances the understanding of the limit parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search', the resource 'Packrift catalog', and specifies the keyword-based interaction. It lists the return fields (price range, stock state, etc.), which distinguishes it from sibling tools like 'get_product' that retrieve a single product or 'check_inventory' that checks stock. The purpose is unambiguous and well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need to search products by keyword, but it does not explicitly state when not to use this tool or mention alternatives such as 'get_product' for exact product retrieval or 'check_inventory' for stock-only queries. No exclusions or prerequisites are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama cannot connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.