synplex-mcp
Server Details
Synplex provides inventory health scoring, landed cost calculation, and quick inventory diagnosis tools for Shopify merchants. It enables AI assistants to assess stock performance, calculate true product costs including duties and shipping, and instantly surface inventory issues.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 3 of 3 tools scored. Lowest: 3.4/5.
Each tool has a clearly distinct purpose: inventory_health_score calculates precise health metrics using actual data, landed_cost_calculator focuses on import cost analysis, and quick_inventory_diagnosis provides a low-friction estimate for lead qualification. The descriptions explicitly differentiate them, with quick_inventory_diagnosis even referencing inventory_health_score for precise alternatives, eliminating any overlap or confusion.
All tool names follow a consistent snake_case pattern with descriptive noun phrases (inventory_health_score, landed_cost_calculator, quick_inventory_diagnosis). The naming is uniform and predictable, with no deviations in style or convention, so agents can grasp each tool's function at a glance.
With 3 tools, the count is slightly lean but reasonable for the server's focus on Shopify merchant inventory and cost analysis. Each tool addresses a distinct aspect of the domain (health scoring, cost calculation, quick diagnosis), and while more tools could expand coverage, the current set is well-scoped and avoids bloat.
The tools cover key inventory and cost analysis workflows for Shopify merchants, including health scoring, landed cost calculation, and quick diagnostics. Minor gaps exist, such as lack of tools for updating inventory data or handling other merchant operations, but the core domain is adequately addressed, and agents can perform essential analyses without dead ends.
Available Tools
3 tools

inventory_health_score (B)
Calculates a comprehensive inventory health score for a Shopify merchant. Returns GMROI, annual turnover, days inventory outstanding, dead stock exposure %, carrying cost leakage, a 0–100 composite score with per-dimension breakdown (gmroi/turnover/dead_stock), benchmark comparisons by industry segment, risk flags, plain-language diagnosis, and prioritised recommended actions.
| Name | Required | Description | Default |
|---|---|---|---|
| segment | Yes | | |
| annualRevenue | Yes | | |
| grossMarginPct | Yes | | |
| deadStockCostValue | Yes | | |
| targetWeeksOfCover | Yes | | |
| avgInventoryCostValue | Yes | | |
| annualCarryingCostRate | Yes | | |
| deadStockThresholdDays | Yes | | |
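The parameter names map onto standard inventory metrics. The server's exact scoring and benchmarking logic is not published, but the headline numbers the description lists (GMROI, annual turnover, days inventory outstanding) follow textbook definitions, sketched here in Python. Parameter names mirror the schema; everything else is an illustration, not the server's implementation.

```python
def inventory_headline_metrics(annualRevenue: float,
                               grossMarginPct: float,
                               avgInventoryCostValue: float) -> dict:
    """Textbook definitions of the metrics inventory_health_score returns.
    The server's actual weighting and benchmark comparisons are not public."""
    gross_margin_dollars = annualRevenue * grossMarginPct / 100
    cogs = annualRevenue - gross_margin_dollars
    # GMROI: gross margin dollars earned per dollar of average inventory at cost
    gmroi = gross_margin_dollars / avgInventoryCostValue
    # Annual turnover: how many times the average inventory sells through per year
    turnover = cogs / avgInventoryCostValue
    # Days inventory outstanding: average days a unit sits in stock
    dio = 365 / turnover
    return {"gmroi": round(gmroi, 2),
            "turnover": round(turnover, 2),
            "days_inventory_outstanding": round(dio, 1)}
```

For example, a merchant with $1.2M annual revenue, a 45% gross margin, and $200k of average inventory at cost would see a GMROI of 2.7 and roughly 3.3 turns per year under these definitions.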
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly indicates this is a calculation/analysis tool (implied read-only, non-destructive), and details the comprehensive output including scores, benchmarks, and recommendations. However, it lacks information on performance characteristics, error handling, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured as a single, information-dense sentence that front-loads the core purpose and then enumerates all output components. Every element serves a clear purpose with zero wasted words, making it easy for an agent to parse the tool's capabilities.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex calculation tool with 8 required parameters, 0% schema coverage, and no output schema, the description does an excellent job detailing the comprehensive output but fails to address the input parameters. The lack of parameter semantics creates a significant gap in contextual understanding despite the rich output description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 8 required parameters, the description provides no information about what the parameters mean or how they relate to the calculation. While it mentions output metrics like GMROI and dead stock exposure, it doesn't connect these to input parameters like 'grossMarginPct' or 'deadStockCostValue', leaving significant semantic gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('calculates', 'returns') and resources ('inventory health score for a Shopify merchant'), and distinguishes it from siblings by mentioning comprehensive metrics like GMROI, turnover, and dead stock exposure that aren't implied in sibling tool names like 'landed_cost_calculator' or 'quick_inventory_diagnosis'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the sibling tools ('landed_cost_calculator' and 'quick_inventory_diagnosis'), nor does it mention any prerequisites, exclusions, or alternative contexts for usage, leaving the agent to infer based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
landed_cost_calculator (A)
Calculates total landed cost per unit for a Shopify merchant importing goods. Applies 2026 regulatory surcharges (US Section 122 +10%, EU de minimis €3/item), incoterm responsibility adjustments (EXW/FOB/DDP), CIF insurance, duty and brokerage, and financial carry cost over lead time. Returns unitLandedCost, costUpliftRatio, marginErosionPct (only when unitSellingPrice is provided), full cost breakdown by layer, sensitivity drivers, risk warnings, and recommended actions. Pass baseDutyRate as a decimal (e.g. 0.12 for 12%) — surcharges are applied automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| incoterm | Yes | | |
| quantity | Yes | | |
| unitCost | Yes | | |
| destination | Yes | | |
| baseDutyRate | Yes | | |
| brokerageFee | Yes | | |
| leadTimeDays | Yes | | |
| insuranceRate | Yes | | |
| originCountry | Yes | | |
| originPortFees | Yes | | |
| freightCostTotal | Yes | | |
| unitSellingPrice | No | | |
| annualCostOfCapital | Yes | | |
| destinationPortFees | Yes | | |
| itemsPerConsignment | Yes | | |
| isMarketplaceCollected | Yes | | |
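The description names the cost layers but not the arithmetic. A simplified sketch of conventional landed-cost math using a subset of the schema's parameters is below; it omits the server's 2026 regulatory surcharges, incoterm responsibility adjustments, and per-consignment de minimis handling, so treat it as an illustration of the standard CIF-plus-duty structure, not the server's formula.

```python
def landed_cost_per_unit(unitCost: float, quantity: int,
                         freightCostTotal: float, insuranceRate: float,
                         baseDutyRate: float, brokerageFee: float,
                         originPortFees: float, destinationPortFees: float,
                         leadTimeDays: int, annualCostOfCapital: float) -> float:
    """Illustrative landed-cost arithmetic. The server additionally applies
    surcharges and incoterm adjustments that are not modelled here."""
    goods_value = unitCost * quantity
    # CIF basis: cost + insurance + freight, with insurance charged
    # as a rate on goods value plus freight
    insurance = (goods_value + freightCostTotal) * insuranceRate
    cif = goods_value + freightCostTotal + insurance
    # Duty is assessed on the CIF value; baseDutyRate is a decimal (0.12 == 12%)
    duty = cif * baseDutyRate
    fees = brokerageFee + originPortFees + destinationPortFees
    # Financial carry: capital tied up in the goods over the lead time
    carry = goods_value * annualCostOfCapital * (leadTimeDays / 365)
    return (cif + duty + fees + carry) / quantity
```

A worked case: 1,000 units at $10 with $2,000 freight, 0.5% insurance, 5% duty, $450 in combined fees, and 10% cost of capital over a 30-day lead time comes out just over $13 per unit landed.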
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing multiple behavioral aspects: it describes the complex calculation logic (regulatory surcharges, incoterm adjustments, cost components), output structure (unitLandedCost, breakdowns, warnings), and specific formatting instructions for baseDutyRate. However, it doesn't mention error handling, performance characteristics, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with key information front-loaded (purpose and scope) followed by calculation details and output explanation. While comprehensive, every sentence adds value with no redundant information, though it could be slightly more concise in listing all output components.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 16-parameter calculation tool with no annotations or output schema, the description provides substantial context about calculation methodology, output structure, and parameter semantics. It adequately covers the tool's purpose and behavior, though additional details about error cases or performance expectations would make it fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 16 parameters, the description provides crucial semantic context that compensates fully. It explains what 'baseDutyRate' represents and how to format it, clarifies that surcharges are applied automatically, and mentions the conditional nature of 'marginErosionPct' output when 'unitSellingPrice' is provided, adding significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('calculates total landed cost per unit'), target user ('Shopify merchant importing goods'), and scope of calculation. It distinguishes this tool from inventory-focused siblings by focusing on import cost calculation rather than inventory analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for import cost calculation scenarios but doesn't explicitly state when to use this tool versus alternatives. No guidance is provided about prerequisites, limitations, or comparison with sibling tools, leaving usage context somewhat implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
quick_inventory_diagnosis (A)
Low-friction inventory health estimate for Shopify merchants. Use this when the merchant doesn't have precise inventory figures — requires only monthly revenue, SKU count, and industry segment. Inventory value and dead stock are estimated from industry benchmarks; all assumptions are returned transparently. Returns a 0–100 health score, risk flags, plain-language diagnosis, and prioritised recommended actions. Ideal for AI-assisted lead qualification and first-contact diagnostics. For a precise score using actual inventory figures, use inventory_health_score instead.
| Name | Required | Description | Default |
|---|---|---|---|
| segment | Yes | Industry segment — used to select benchmark targets | |
| skuCount | Yes | Number of active SKUs | |
| grossMarginPct | No | Gross margin % (0–100). Uses industry default if not provided. | |
| monthlyRevenue | Yes | Monthly revenue in USD | |
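The description says inventory value and dead stock are "estimated from industry benchmarks". One plausible shape of that estimation, backing into inventory at cost from revenue via a weeks-of-cover benchmark, is sketched below. The benchmark table and segment keys here are invented for illustration; the server's real per-segment figures are not public.

```python
# Hypothetical benchmark table -- the server's real per-segment values are not published.
BENCHMARKS = {
    "apparel": {"weeks_of_cover": 12, "gross_margin_pct": 55},
    "electronics": {"weeks_of_cover": 8, "gross_margin_pct": 30},
}

def estimate_inventory_value(segment: str, monthlyRevenue: float,
                             grossMarginPct=None) -> float:
    """Back into an inventory-at-cost estimate from revenue and a
    benchmark weeks-of-cover figure for the segment."""
    b = BENCHMARKS[segment]
    margin = grossMarginPct if grossMarginPct is not None else b["gross_margin_pct"]
    # Weekly cost of goods sold, derived from annualised revenue and margin
    weekly_cogs = monthlyRevenue * 12 / 52 * (1 - margin / 100)
    # Inventory at cost is roughly weekly COGS times the benchmark cover
    return weekly_cogs * b["weeks_of_cover"]
```

Under these assumed numbers, an apparel merchant doing $50k/month would be estimated to hold about $62k of inventory at cost, which the tool would then score against segment targets.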
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's an estimation tool using benchmarks, returns transparent assumptions, and outputs a health score with risk flags and recommendations. However, it doesn't mention potential limitations like accuracy concerns or rate limits, which would be helpful for a tool without annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. Every sentence adds value: explaining when to use it, what it returns, how it works, and differentiation from alternatives. There's no wasted text or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description does an excellent job explaining what the tool does, when to use it, and what it returns. It covers the estimation methodology and output components. The only minor gap is not explicitly describing the format of the returned data, though it lists the components.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some context by mentioning the three required inputs and that gross margin uses a default if not provided, but doesn't provide additional semantic meaning beyond what's in the schema. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Low-friction inventory health estimate for Shopify merchants.' It specifies the action ('estimate'), resource ('inventory health'), and scope ('Shopify merchants'), and distinguishes it from the sibling tool 'inventory_health_score' by emphasizing it's for when precise figures aren't available.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when the merchant doesn't have precise inventory figures') and when to use an alternative ('For a precise score using actual inventory figures, use inventory_health_score instead'). It also mentions ideal use cases ('AI-assisted lead qualification and first-contact diagnostics').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
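Before publishing, it can help to sanity-check the payload locally so the verification doesn't fail on a typo. A minimal sketch follows; the helper name is ours and is not part of any Glama tooling.

```python
import json

def manifest_matches_account(raw_json: str, account_email: str) -> bool:
    """Check that a glama.json payload parses and lists the maintainer
    email that is associated with your Glama account."""
    doc = json.loads(raw_json)
    return any(m.get("email") == account_email
               for m in doc.get("maintainers", []))
```

For example, loading the file's contents and calling the helper with your account email should return True before you deploy it to `/.well-known/glama.json`.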
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.