Glama

Server Quality Checklist

83%
Profile completion: A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Tools map clearly to distinct lifecycle stages (discovery → import → staging → rules → push → live). While there are multiple 'product_' prefixed tools, their purposes (import, preview, update_rules, delete, visibility) are differentiated by specific actions and detailed descriptions. The boundary between import_list (staging browser) and my_products (live store) is explicitly defined.

    Naming Consistency 3/5

    Mixed naming patterns reduce predictability: most use noun_verb (product_import, store_push, rules_validate), but find_product uses verb_noun, import_list uses noun_noun, and my_products uses adjective_noun. While consistently snake_cased with dsers_ prefix, the alternating resource-first vs action-first conventions require careful reading to distinguish.

    Tool Count 5/5

    Twelve tools appropriately cover the dropshipping workflow without bloat. The set includes discovery, import, staging management, rule validation, visibility control, pushing, and status checking—each earning its place. No redundant tools or missing critical intermediates for the DSers-specific scope.

    Completeness 4/5

    Covers the full CRUD lifecycle for DSers staging (import, update rules, preview, delete, push) and provides store discovery and job monitoring. Minor gaps exist: no explicit job listing/cancellation, and no update capability for already-pushed products (delegated to Shopify/Wix as noted). The core AliExpress-to-store pipeline is fully functional.

  • Average 4.6/5 across 12 of 12 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.0.3

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 12 tools.
  • No known security issues or vulnerabilities reported.

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare read-only/idempotent safety, while the description adds crucial behavioral context: the state machine lifecycle and the exact return value structure (job_id, status, target_store, etc.) since no output schema exists. It could be elevated to 5 by mentioning polling frequency recommendations or rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three tightly constructed sentences each deliver distinct value: purpose statement, behavioral lifecycle, and return value documentation. No redundancy or filler text. Information is front-loaded with the action verb.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple read-only tool with one parameter and comprehensive annotations, the description is complete. It compensates for the missing output schema by inline-documenting return fields, providing sufficient information for successful invocation and result handling.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage (job_id fully documented in the schema referencing source tools), the description appropriately relies on the structured schema. No additional parameter semantics are needed in the text, meeting the baseline for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Check') and clear resource ('current status of an import or push job'), distinguishing it from sibling tools like dsers_product_import (which creates jobs) and dsers_store_push (which initiates pushes). The scope is precisely bounded to status tracking.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The status lifecycle ('preview_ready → push_requested → completed or failed') implies a polling pattern and sequential usage context. However, it lacks explicit guidance such as 'Use this after calling dsers_product_import' or warnings about when not to use it (e.g., not for searching products).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
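The lifecycle quoted above ('preview_ready → push_requested → completed or failed') implies a polling loop that stops on a terminal status. A minimal sketch, assuming a generic `call_tool` client helper; the tool name `dsers_import_status` is a guess, since the review never states the status tool's actual name:

```python
def poll_job(call_tool, job_id, max_attempts=10):
    """Poll the status tool until the job reaches a terminal state.

    The status values (preview_ready, push_requested, completed,
    failed) come from the lifecycle the description documents; the
    tool name and response shape are assumptions.
    """
    terminal = {"completed", "failed"}
    result = None
    for _ in range(max_attempts):
        result = call_tool("dsers_import_status", {"job_id": job_id})
        if result["status"] in terminal:
            return result
    raise TimeoutError(f"job {job_id} still {result['status']} "
                       f"after {max_attempts} polls")
```

In practice an agent would add a sleep between attempts; the review notes the description never recommends a polling frequency, which is exactly the gap this dimension flags.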

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations cover read-only/destructive/idempotent status, so the description appropriately focuses on adding pagination behavior ('Use search_after from the response') and response structure ('Results include import_url'). It clarifies the data source (AliExpress catalog) but doesn't mention rate limits or authentication requirements.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences with strong information density and front-loading. Minor redundancy exists in mentioning 'import_url' twice (sentences 2 and 4), but each sentence serves a distinct purpose: search capability, return value, pagination, and workflow continuation.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 8 parameters and no output schema in the context, the description compensates effectively by explaining key response fields (import_url, search_after) and the complete workflow lifecycle from search to store push. It provides sufficient context for an agent to use the tool correctly without additional documentation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the mutual exclusivity of keyword vs image_url and mentions search_after for pagination, but doesn't add syntax details, format constraints, or semantic explanations beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Search'), clearly identifies the resource ('DSers product pool (AliExpress supplier catalog)'), and specifies the dual search modalities ('keyword or by image URL'). It distinguishes itself from sibling tools by focusing on the discovery phase versus import/push operations.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly maps the tool's position in the workflow chain by naming three sibling tools: 'directly imported via dsers_product_import', 'use dsers_import_list to browse', and 'dsers_store_push to push'. This provides clear when-to-use guidance relative to alternatives.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
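The pagination behavior described above ('Use search_after from the response') can be sketched as a cursor-drain loop. The response keys `items` and `search_after` are assumptions about the payload shape; only the cursor mechanism itself comes from the review:

```python
def search_all(call_tool, keyword, max_pages=5):
    """Drain paginated dsers_find_product results via the
    search_after cursor the description documents."""
    results, cursor = [], None
    for _ in range(max_pages):
        args = {"keyword": keyword}
        if cursor is not None:
            args["search_after"] = cursor   # continue from last page
        page = call_tool("dsers_find_product", args)
        results.extend(page["items"])
        cursor = page.get("search_after")
        if not cursor:                      # no cursor: last page
            break
    return results
```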

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations indicate idempotent, non-destructive writes; the description adds crucial workflow context and discloses return values ('job_id, status, visibility_mode') not present in annotations or structured fields. Could explicitly mention idempotency to fully align with annotations, but 'switch between' implies reversibility.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste: purpose (sentence 1), workflow positioning (sentence 2), and return values (sentence 3). Front-loaded with the core action and efficiently structured for an agent parsing the description.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a 2-parameter workflow tool with robust schema coverage and annotations. The description compensates for the missing output schema by listing return values. Would be perfect with explicit mention of idempotency or retry-safety for the workflow context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already documents parameters thoroughly including risk warnings ('SAFE' vs 'RISK'). The description maps 'draft' and 'published' concepts to the visibility_mode parameter, but this adds minimal semantic value beyond the comprehensive schema descriptions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states a specific verb ('Change') with clear resource ('visibility mode of a prepared job') and scope ('before pushing it to the store'). It effectively distinguishes this toggle from sibling import/push tools by positioning it as an intermediate workflow step.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit temporal guidance ('Call this between dsers_product_import and dsers_store_push') and names specific siblings to establish workflow order. Clarifies the binary choice ('switch between draft and published') and implies this is optional preprocessing before the final push.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    While annotations cover safety profile (readOnlyHint, destructiveHint), the description adds crucial behavioral context: it discloses the three-component return structure (effective_rules_snapshot, warnings, errors) and explains what each contains (adjustments made vs blocking issues). The 'normalize' behavior is also disclosed.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three efficient segments: action statement, usage context, and structured return documentation. Every clause provides distinct value. No redundant text or tautology.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Compensates well for missing output schema by documenting return values (effective_rules_snapshot, warnings, errors) and their meanings. Establishes clear relationship with sibling tool dsers_product_import. Could mention idempotency explicitly, though covered by annotations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with detailed parameter descriptions (including JSON structure examples and pricing modes). The description references 'rules object' and rule types (pricing, content, images) aligning with schema but does not add semantic information beyond what the schema already provides. Baseline 3 appropriate.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verbs ('Check and normalize') and resource ('rules object') with clear scope ('against provider's capabilities'). Explicitly distinguishes from sibling dsers_product_import by positioning itself as the pre-import validation step.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance: use this 'before importing' and specifically references dsers_product_import as the subsequent step requiring error-free validation. Explains the value proposition ('see exactly which ones will be applied').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
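The validate-before-import sequencing recommended above can be sketched as a simple gate. The response fields (`errors`, `warnings`, `effective_rules_snapshot`) are the ones named in the review; the import tool's argument names are assumptions:

```python
def validate_then_import(call_tool, rules_json, product_url):
    """Gate dsers_product_import on an error-free
    dsers_rules_validate pass, per the description's guidance."""
    report = call_tool("dsers_rules_validate", {"rules_json": rules_json})
    if report["errors"]:                    # blocking issues: stop here
        return {"ok": False, "errors": report["errors"]}
    job = call_tool("dsers_product_import",
                    {"url": product_url, "rules_json": rules_json})
    return {"ok": True,
            "job_id": job["job_id"],
            "warnings": report["warnings"]}  # non-blocking adjustments
```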

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    While annotations declare readOnly/idempotent hints, the description adds critical behavioral context: it decodes specific response flags ('ae_expired' vs 'plan_issue' severity), explains that auth expiration doesn't block imports, and provides workflow constraints ('do NOT retry') that augment the idempotency annotation. Could improve by mentioning cache duration or rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense and well-structured: opens with purpose, explains response structure, details specific flags (ae_expired, plan_issue), and closes with explicit workflow instructions. Every sentence delivers actionable information; no filler or tautology.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of an output schema, the description adequately compensates by documenting the return structure (stores with nested fields, rules categories) and explaining domain-specific status flags. It successfully establishes the tool's role as the foundation for sibling operations. Minor gap: no mention of error handling or rate limiting.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description mentions 'Use the id or name from this response in later calls,' which implicitly validates the parameter's purpose, but does not explicitly discuss the 'target_store' filter behavior or provide examples beyond what the schema already documents.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Retrieve') and clearly identifies the resources (stores and supported rules). It explicitly distinguishes this tool as a prerequisite for 'all subsequent operations,' clearly positioning it relative to its siblings (import, preview, push, etc.).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit sequencing guidance ('Call this first'), explains the dependency chain (response contains IDs needed by subsequent operations), and gives clear post-call instructions ('proceed to the next step... do NOT retry discover'). It effectively maps the tool's position in the workflow.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
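The 'call this first ... do NOT retry discover' guidance suggests caching the discovery response and reusing its ids. A minimal sketch; the tool name `dsers_store_discover` and the response shape (`{"stores": [{"id": ..., "name": ...}]}`) are assumptions inferred from the review:

```python
class StoreContext:
    """Call the discovery tool once and reuse its response for
    all later id lookups, per the documented guidance."""

    def __init__(self, call_tool):
        self._call = call_tool
        self._cache = None

    def stores(self):
        if self._cache is None:             # discover exactly once
            self._cache = self._call("dsers_store_discover", {})
        return self._cache["stores"]

    def store_id(self, name):
        for store in self.stores():
            if store["name"] == name:
                return store["id"]
        raise KeyError(name)
```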

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    While annotations declare readOnly/idempotent hints, the description adds substantial behavioral context: details the response structure (price_summary object, options array format), explains truncation behavior (values truncated to 10), clarifies field semantics (sell_price vs cost vs compare_at_price), and notes active_rules is always present. Compensates well for missing output schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense with zero waste. Front-loaded purpose statement followed by mode comparison, field definitions, and specific usage guidance. Every sentence conveys critical information about response structure, parameter interaction, or usage context. Excellent structure for a complex tool with multiple modes.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complex nested response structure and lack of output schema, the description comprehensively documents return values: details price_summary object fields, variant table columns for each mode, options array structure with truncation behavior, and active_rules. Adequately covers the tool's behavioral surface area.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (baseline 3). Description adds significant semantic value: explains 'compact' is 'lightweight' while 'full' includes cost/supplier data, clarifies that show_all_options should be used before applying option_edits (workflow context), and maps variant_detail values to specific column outputs beyond the schema's basic description.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states 'Reload preview for an import job' with specific verbs and resource. It distinguishes from sibling dsers_product_import by emphasizing this is a reload/preview operation on an existing job (referenced via job_id), not the initial import creation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit guidance: 'Use variant_detail='full' when agent needs compare_at or cost columns' and hints at workflow with 'set show_all_options=true for full list'. Could be improved by explicitly stating when to prefer compact vs full mode or mentioning this should be used after import creation but before final confirmation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
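The mode guidance above (variant_detail='full' for compare_at/cost columns, show_all_options=true to bypass the 10-value truncation) maps directly onto an argument builder. Parameter names follow the review; the default values are assumptions:

```python
def preview_args(job_id, need_cost_columns=False, need_all_options=False):
    """Build dsers_product_preview arguments per the description's
    guidance on when to use 'full' vs 'compact' mode."""
    return {
        "job_id": job_id,
        "variant_detail": "full" if need_cost_columns else "compact",
        "show_all_options": need_all_options,
    }
```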

  • Behavior 5/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Exceeds annotations by detailing critical merge semantics (incremental vs full replacement), clearing mechanisms (empty string/null), and safety warnings ('DELETE all variants with that value'). Discloses response format ('compact preview with active_rules') absent from output schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information-dense with zero waste—every clause explains critical behavior (merge logic, deletion warnings, response format). Slight deduction for monolithic paragraph structure that could benefit from formatting breaks between rule families.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for a high-complexity tool (13 params, nested JSON structures). Compensates for missing output schema by describing return value. Covers edge cases (clearing fields, removing families, variant deletion) essential for correct invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 100% schema coverage, adds substantial value for complex rules_json parameter by documenting JSON structure, merge behavior, and specific option_edits action syntax with examples (rename_option, remove_value, etc.). Explains interaction patterns between rule families.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Update' with clear resource 'pricing, content, images, or variant rules' on 'already-imported product' effectively distinguishes from sibling tools like dsers_product_import (create) and dsers_product_preview (view).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implicitly establishes prerequisite via 'already-imported product' and job_id requirement, indicating this follows dsers_product_import. Lacks explicit naming of alternatives (e.g., when to use dsers_rules_validate instead), but provides sufficient context for correct sequencing.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
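The merge semantics praised above (incremental updates, empty string/null as a clearing mechanism) can be modeled locally. This mimics the documented behavior for illustration; it is not the server's implementation:

```python
def merge_rules(existing, update):
    """A local model of the incremental rule merge the description
    documents: keys present in the update overwrite, keys set to
    None or '' clear the field, and untouched keys survive."""
    merged = dict(existing)
    for key, value in update.items():
        if value in (None, ""):
            merged.pop(key, None)           # clearing mechanism
        else:
            merged[key] = value
    return merged
```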

  • Behavior 5/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Excellent disclosure beyond annotations: details automatic validation checks (pricing/stock thresholds), distinguishes blocked (hard stop) vs warnings (soft alerts), explains response structure including per-job return fields, and clarifies external system constraints (Shopify/Wix). Annotations indicate destructive/openWorld; description explains exactly what gets mutated and how safety mechanisms work.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Lengthy but justified by complexity of destructive operation with three distinct modes and critical safety constraints. Well-structured with clear 'SAFETY' and 'SAFETY RESPONSE' headers. Front-loaded with validation warnings appropriate for destructiveHint=true. Every section serves agent decision-making.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for a complex 7-parameter destructive tool. Compensates for missing output schema by detailing response structure (blocked/warnings arrays, per-job return fields). Covers error handling, success states, and external system behavior (Shopify/Wix). Addresses edge cases (zero inventory, below-cost pricing) required for safe invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema coverage, baseline is 3. Description adds significant value by explaining parameter interactions: job_ids_json takes priority over job_id, target_stores_json enables multi-store pushes vs single target_store, and force_push requires explicit user confirmation. Explains visibility_mode safety implications ('RISK: confirm pricing/inventory').

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description clearly states the tool pushes 'prepared import drafts' to 'Shopify or Wix store(s)' using specific verbs and resources. It distinguishes from siblings like dsers_product_import (which creates drafts) by focusing on the push action to external stores.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly documents three distinct operation modes (single, batch, multi-store) with specific parameter combinations. Provides clear safety guidance for force_push ('ONLY after explaining the risk'). Could improve by explicitly naming dsers_product_import as the prerequisite tool, though job_id references imply this workflow.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
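The documented precedence (job_ids_json takes priority over job_id) can be sketched as a small normalizer. Returning a flat list of job ids is an assumption about how the server resolves its input:

```python
import json

def resolve_push_targets(job_id=None, job_ids_json=None):
    """Model dsers_store_push input precedence: the batch parameter
    job_ids_json wins over the single job_id, per the description."""
    if job_ids_json:
        return json.loads(job_ids_json)     # batch mode wins
    if job_id:
        return [job_id]                     # single-job mode
    raise ValueError("either job_id or job_ids_json is required")
```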

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare read-only/idempotent safety profile. Description adds critical behavioral context: field semantics (no_markup logic, low_stock_warning threshold), return payload structure (lists all 11 fields), and crucial API quirk 'DSers API returns a minimum of 20 items per page. If page_size < 20, results are truncated client-side' which affects pagination behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Efficiently structured: purpose → return fields → field semantics → related tool workflows → sibling differentiation → behavioral note. Every sentence conveys unique information; no repetition of schema defaults or annotation values.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    No output schema exists, but description compensates by exhaustively listing all return fields (import_item_id through tags) and explaining business logic (no_markup, low_stock_warning). Complex domain (import staging, variants, push status) is fully contextualized with sibling relationships.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (baseline 3). Description adds value beyond schema by explaining the effective minimum constraint: despite schema allowing page_size=1, the note clarifies that DSers returns minimum 20 items and truncation happens client-side, which is essential for correct pagination handling.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Opens with specific verb 'Browse' + resource 'DSers import list' + qualifier 'with enriched variant data'. Explicitly distinguishes from siblings by defining this as 'the staging area before push' versus 'products already on your store' (dsers_my_products) and 'search for new products' (dsers_find_product).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use: 'This is the staging area before push'. Names specific alternatives: 'To see products already on your store, use dsers_my_products' and 'To search for new products, use dsers_find_product'. Also specifies related workflows: 'Use import_item_id with dsers_product_preview for full details or dsers_product_delete to remove'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations cover safety profile (readOnly, destructive, idempotent). Description adds significant behavioral context: return field structure (compensating for no output schema), price conversion logic (cents to dollars), and pagination quirk (minimum 20 items, client-side truncation).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Every sentence earns its place: purpose, prerequisite, return format, price conversion, scope clarification, alternative reference, pagination note. No filler. Front-loaded with core purpose.

    Completeness5/5

    Comprehensive for tool complexity. Compensates for missing output schema by documenting all return fields. Covers prerequisites, sibling differentiation, and pagination behavior. With annotations present, description provides complete operational picture.

    Parameters4/5

    Despite 100% schema coverage, description adds valuable context: store_id provenance (from dsers_store_discover) and page_size behavioral constraint (truncation if < 20). Elevates above baseline 3.

    Purpose5/5

    Description opens with specific verb 'Browse' + resource 'products already pushed to a store', clearly distinguishing from siblings. Explicitly contrasts with dsers_import_list ('waiting to be pushed') and links to dsers_store_discover for prerequisites.

    Usage Guidelines5/5

    Provides explicit prerequisite ('Requires store_id from dsers_store_discover') and clear alternative tool reference ('To see products waiting to be pushed, use dsers_import_list'). States exact scope (products 'already pushed').

Behavior4/5

    Beyond annotations (mutation, non-destructive), description adds critical behavioral context: draft/import list workflow, automatic deduplication for re-imports, deprecation of job_id parameter, and response size expectations (~100 tokens per product in summary mode).

    Conciseness5/5

    Information-dense but highly structured with capitalized section headers (EXPIRED/LOST JOB_ID, SINGLE RESPONSE, BATCH RESPONSE). Every sentence conveys distinct operational guidance; no filler despite covering 18 parameters and multiple modes.

    Completeness5/5

    Extremely complete for a complex 18-parameter tool. Compensates for missing output schema by detailing response formats (compact preview vs summary vs full), covers edge cases (expired job_id, multi-store prerequisites), and explains all enum behaviors.

    Parameters4/5

    With 100% schema coverage (baseline 3), description adds value by explaining the mutually exclusive parameter patterns ('source_url (single) or source_urls_json (batch)'), the dual rules approach ('rules_json or flat params'), and conceptual groupings that clarify when to use which parameters.

    Purpose5/5

    Opens with specific verb+resource ('Import product(s) from supplier URL(s) into the DSers import list'), explicitly lists supported platforms (AliExpress, Alibaba, Accio), and distinguishes from siblings (dsers_product_update_rules, dsers_product_preview).

    Usage Guidelines5/5

    Explicitly states when-not-to-use alternatives ('To UPDATE rules on an existing import, use dsers_product_update_rules instead'), explains deduplication behavior for expired jobs, and warns about visibility_mode risks ('RISK: verify pricing and inventory first').

Behavior5/5

    While annotations mark this as destructive, the description adds critical behavioral details: irreversibility warning, the specific two-phase confirmation protocol (returns prompt without confirm=true), business impact (loss of supplier mapping), and exact scope boundaries. No contradictions with annotations.

    Conciseness5/5

    Excellent structure with clear section headers (IRREVERSIBLE, SCOPE, BUSINESS CONTEXT, AGENT PROTOCOL). Every sentence serves a purpose—warning of permanence, clarifying scope limitations, explaining business consequences, or prescribing agent workflow. Despite length, no waste.

    Completeness5/5

    For a destructive, irreversible operation with a two-phase confirmation flow, the description is comprehensive. It covers the action, limitations, side effects, recovery implications, and required agent behavior. No output schema exists, but the description adequately explains the confirmation prompt response behavior.

    Parameters4/5

    With 100% schema coverage, the baseline is 3. The description adds value beyond the schema by specifying the agent protocol for handling confirm (show product title, get explicit user consent) and emphasizing that the confirmation prompt must be shown to the user. Could explain import_item_id acquisition more prominently.

    Purpose5/5

    The description explicitly states the specific action (permanently delete), resource (product from DSers import list), and scope limitations (import list only, not Shopify/Wix stores). It clearly distinguishes from sibling tools like dsers_store_push by specifying this only affects the pre-push staging area.

    Usage Guidelines5/5

    Provides explicit when-not-to-use guidance ('Products already pushed to Shopify/Wix stores are NOT affected') and directs to alternatives ('use the Shopify/Wix admin directly'). Also clearly documents the two-phase confirmation flow required before execution.

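The two-phase confirmation protocol these notes describe can be sketched with a stub. Everything here is hypothetical — `call_tool`, the response fields (`status`, `prompt`, `title`), and the item id illustrate the flow, not the server's actual API:

```python
def call_tool(name: str, args: dict, _staging={"imp_123": "Wireless Earbuds"}):
    """Stub MCP client mimicking the described protocol: without
    confirm=True the server returns a prompt instead of deleting."""
    item_id = args["import_item_id"]
    if not args.get("confirm"):
        return {"status": "confirmation_required",
                "prompt": f"Permanently delete '{_staging[item_id]}'?"}
    return {"status": "deleted", "title": _staging.pop(item_id)}

# Phase 1: no confirm flag -- nothing is deleted, a prompt comes back.
first = call_tool("dsers_product_delete", {"import_item_id": "imp_123"})
assert first["status"] == "confirmation_required"

# Phase 2: only after showing the prompt and getting explicit user
# consent does the agent repeat the call with confirm=True.
second = call_tool("dsers_product_delete",
                   {"import_item_id": "imp_123", "confirm": True})
assert second["status"] == "deleted"
```

The point of the design is that the first call is safe to make blindly: the destructive effect is gated behind the explicit second call.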
GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

dsers-mcp-product MCP server

Copy to your README.md:

Score Badge

dsers-mcp-product MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a tool definition quality score (TDQS) of 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the whole score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
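The formula above can be reproduced directly. A minimal sketch using the published weights; the two per-tool score sets and the coherence value in the example are invented for illustration:

```python
# Per-tool dimension weights from the description above.
TDQS_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 tool definition quality score for one tool."""
    return sum(TDQS_WEIGHTS[dim] * s for dim, s in scores.items())

def overall(tool_scores: list, coherence: float) -> float:
    """Overall = 70% definition quality + 30% coherence, where
    definition quality = 60% mean TDQS + 40% minimum TDQS."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    for grade, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return grade
    return "F"

# Example: one tool scored all 5s, one all 4s, coherence 4.0.
score = overall([{d: 5 for d in TDQS_WEIGHTS},
                 {d: 4 for d in TDQS_WEIGHTS}], coherence=4.0)
assert abs(score - 4.21) < 1e-9 and tier(score) == "A"
```

The 40% weight on the minimum TDQS is what makes a single weak tool disproportionately costly: improving the worst-described tool moves the server score faster than polishing an already strong one.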

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lofder/dsers-mcp-product'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.