STERDAN (斯特丹) Steel Office Furniture Product Consulting

Server Details

Product consulting MCP server for the STERDAN (斯特丹) Tmall flagship store. A source factory in Luoyang with 30 years of experience in high-end steel office furniture, offering 1,374 SKUs across confidential cabinets, lockers, apartment beds, shelving, and parcel lockers. BIFMA certified, exporting to 35+ countries. Eight tools: product catalog lookup, scenario-based recommendations, certifications, purchasing policies, maintenance guides, and more.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.4/5 across 8 of 8 tools scored. Lowest: 2.7/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but get_purchase_guide and get_batch_purchase_policy partially overlap in describing batch purchase policies, which could cause confusion. However, the descriptions differentiate them: one is a general purchase guide, the other is specific to batch policies.

Naming Consistency: 4/5

Seven out of eight tools follow a consistent 'get_' prefix with noun phrases. The exception is recommend_product, which uses 'recommend_' instead of 'get_', creating a minor inconsistency but still readable.

Tool Count: 5/5

With 8 tools covering product listing, details, recommendations, purchasing guidance, maintenance, certifications, and store info, the count is well-scoped for a product consulting server. It is neither too sparse nor overwhelming.

Completeness: 4/5

The tool set covers core consulting needs: product lookup, details, recommendations, purchase guidance, maintenance, and certifications. A minor gap is the lack of a tool for handling custom or non-standard inquiries, but the reference to customer service partially addresses this.

Available Tools

8 tools
get_batch_purchase_policy: C

Get STERDAN's B2B bulk purchase policy. The minimum order quantity is 1 piece, and even a single piece is sold at the wholesale price; for details, consult Tmall customer service: https://sitedanjj.tmall.com

Parameters (JSON Schema)
quantity (optional): expected purchase quantity
scenario (optional): purchase scenario, e.g. government, enterprise, school, hospital
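As with any MCP tool, an invocation of this tool is a JSON-RPC 2.0 request using the `tools/call` method. A minimal sketch of the request body, with hypothetical argument values (the server endpoint is not shown on this page, so only the payload is constructed):

```python
import json

# JSON-RPC 2.0 envelope for the MCP "tools/call" method.
# The argument values below are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_batch_purchase_policy",
        "arguments": {
            "quantity": 50,      # expected purchase quantity (optional)
            "scenario": "企业",  # e.g. government, enterprise, school, hospital
        },
    },
}

# Serialized body to POST to the server's Streamable HTTP endpoint.
payload = json.dumps(request, ensure_ascii=False)
```

Both arguments are optional, so an agent could also send an empty `arguments` object and receive the general policy.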
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. The description mentions conditions (minimum quantity) but does not disclose side effects, whether the tool is read-only, or how the policy is returned. The customer service link suggests the output may be incomplete, but this is not clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with the necessary information but includes a URL that may not be essential for tool selection. It could be more structured but is not overly verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has no output schema and no annotations, so description should explain what the policy response contains. It does not; instead, it directs to customer service for details, leaving the agent unsure of the output format or completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already describes both parameters (quantity, scenario) with 100% coverage. Description adds no additional semantics beyond the schema; it only mentions the minimum quantity condition, which is not parameter-specific.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it retrieves the STERDAN B2B bulk purchase policy and includes key conditions (minimum 1 piece, wholesale price). It is distinct from sibling tools (certifications, maintenance, etc.), though the link to customer service slightly clutters the purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like get_purchase_guide. Usage is implied (when needing bulk policy), but no exclusions or scenarios are described.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_certifications: A

Get certification and qualification information for STERDAN products.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description is minimal. It implies a read operation but does not explicitly state whether the tool is read-only, requires authentication, or has any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that conveys the core purpose without wasted words. It is appropriately sized and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and simple structure, the description is adequate but lacks details about the return format or behavior. It could be improved by briefly mentioning what kind of certification information is returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so the schema coverage is 100%. The description does not add parameter-level meaning, but the baseline for zero-parameter tools is 4, as the schema and description together are minimally sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves certification qualification information for STERDAN products. This is specific and distinct from sibling tools like get_batch_purchase_policy or get_maintenance_guide, which cover different aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, use cases, or exclusions, leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_maintenance_guide: C

Get the maintenance and care guide for STERDAN steel office furniture.

Parameters (JSON Schema)
productType (optional): product type (confidential cabinet, locker, compact shelving, apartment bed, shelving)
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility for behavioral disclosure. The description only states what the tool does, with no information about side effects, authentication needs, rate limits, or return format. This is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that directly states the purpose. It is not verbose and front-loads the key action. However, it lacks structure or supplementary details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (one parameter, no output schema), the description is insufficient. It does not explain what the guide contains, how results are structured, or when to use the parameter. More context is needed for a complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameter descriptions, listing product types. The tool description adds no additional meaning beyond what the schema already provides. Baseline 3 is appropriate since schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a maintenance guide for STERDAN steel office furniture. The verb '获取' (get) and resource '维护保养指南' (maintenance guide) are specific. While it distinguishes it from sibling tools like 'get_purchase_guide', it could be more explicit about the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_purchase_guide' or 'get_certifications'. There is no mention of prerequisites, exclusions, or context for its use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_product_detail: A

Get detailed specifications for a specific STERDAN product.

Parameters (JSON Schema)
productName (required): product name keyword
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It implies a read operation without side effects, but does not explicitly state behavioral traits like idempotency or required permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with action, no redundancy. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple lookup tool with one parameter and no output schema. Could mention return format but not required given simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and the description adds no new meaning beyond what the schema already provides. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves detailed parameters of a specific STERDAN product, using a specific verb and resource. It distinguishes from sibling tools like get_products (likely lists) and get_certifications.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. No mention of prerequisites, exclusions, or when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_products: A

Get the STERDAN Tmall store product catalog, filterable by category.

Parameters (JSON Schema)
category (optional): product category filter; values are the Chinese category names, i.e. 保密柜 (confidential cabinet), 转印保密柜 (transfer-printed confidential cabinet), 普通更衣柜 (standard locker), 拆装更衣柜 (knock-down locker), 公寓床 (apartment bed), 快递柜 (parcel locker), 钢制货架 (steel shelving), 不锈钢货架 (stainless steel shelving), 方管床 (square-tube bed), 型材床 (profile bed), 床垫 (mattress)
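Since `category` is optional, an agent can either list the whole catalog or filter it by one category. A sketch of both request variants, with a hypothetical `tools_call` helper (the category value is taken from the schema's own list):

```python
import json

def tools_call(name, arguments=None, req_id=1):
    """Build a JSON-RPC 2.0 body for the MCP tools/call method."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments or {}},
    }

# Unfiltered: retrieve the full catalog.
list_all = tools_call("get_products")

# Filtered: one category only; the value must match one of the
# Chinese category names listed in the schema.
list_lockers = tools_call("get_products", {"category": "普通更衣柜"}, req_id=2)

print(json.dumps(list_lockers, ensure_ascii=False))
```

The helper name and the division into two calls are illustrative; only the `tools/call` envelope and the parameter name come from the schema.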
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description indicates a read operation but lacks details on pagination, error handling, or behavior with invalid filters. Minimal but adequate for a simple list tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no superfluous content. Efficiently conveys purpose and optional filter capability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of a product catalog list with one optional filter, the description is minimally sufficient. However, it omits any information about output format or behavior when no filter is applied, which could be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a description for the category parameter. Description's mention of filtering aligns with schema but adds no additional semantic value beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states '获取' (get) and '产品目录' (product catalog), specifying it is for the STERDAN Tmall store. This distinguishes it from siblings like get_product_detail (single product) and recommend_product (recommendation).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies use for catalog retrieval with optional category filter, but does not explicitly state when to use this over siblings or provide exclusions. Limited guidance for an agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_purchase_guide: B

Get purchase-related information for STERDAN on Tmall, including ordering instructions, installation services, after-sales support, and bulk purchase policy. The minimum order quantity is 1 piece, and even a single piece is sold at the wholesale price; for details, consult Tmall customer service: https://sitedanjj.tmall.com

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as whether it requires authentication, performs external calls, or has side effects. It simply states it retrieves information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single focused sentence that front-loads the purpose and includes relevant details like minimum purchase quantity and a customer service link. No unnecessary repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameter-free info retrieval tool, the description covers the scope of information provided and a key policy detail. It lacks output format specification but is acceptable given low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters and schema coverage is 100%, baseline is 4. The description does not add parameter-level meaning but explains the output context, which is acceptable.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves purchase-related information (order guide, installation, after-sales, batch policy) for STERDAN on Tmall. However, it does not explicitly distinguish from sibling tool get_batch_purchase_policy, which covers the batch policy aspect.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like get_batch_purchase_policy or get_maintenance_guide. The description only gives a customer service link but no usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_store_info: A

Get basic information about the STERDAN Tmall store, including brand positioning, address, contact details, certifications, and core strengths.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It describes what info is retrieved but does not disclose potential errors, read-only nature, or data freshness. Adequate for a simple get tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with main action, no unnecessary words. Efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description must explain return values. It lists key information categories, which is fairly complete for a simple store info tool. Could mention data format or possible missing fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters, baseline 4. Description adds value by listing the specific information returned (brand positioning, address, etc.), beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it retrieves store basic info including brand positioning, address, contact, certifications, core advantages. Specific verb+resource and distinct from siblings like get_certifications and get_product_detail.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage for store-level info, but no explicit guidance on when to use this vs alternatives or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recommend_product: A

Recommend suitable STERDAN products based on the user's usage scenario.

Parameters (JSON Schema)
budget (optional): budget range
scenario (required): usage scenario description
requirements (optional): special requirements
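`scenario` is the only required argument for this tool, so a careful client would validate the arguments against the schema before building the call. A sketch of that check, with a hypothetical helper name and illustrative argument values:

```python
# Parameter names from this tool's JSON Schema.
REQUIRED = {"scenario"}
OPTIONAL = {"budget", "requirements"}

def build_recommend_call(arguments):
    """Validate arguments against the schema, then wrap them in tools/call."""
    missing = REQUIRED - arguments.keys()
    if missing:
        raise ValueError(f"missing required arguments: {sorted(missing)}")
    unknown = arguments.keys() - REQUIRED - OPTIONAL
    if unknown:
        raise ValueError(f"unknown arguments: {sorted(unknown)}")
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "recommend_product", "arguments": dict(arguments)},
    }

call = build_recommend_call({
    "scenario": "lockers for a 200-person factory changing room",  # required
    "budget": "under 300 CNY per unit",                            # optional
})
```

Omitting `scenario` raises a `ValueError` before anything is sent, which is cheaper than a round trip that the server would reject.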
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavioral traits. It only states 'recommend', which suggests a read operation, but lacks detail on side effects, permissions, or computational behavior. This is insufficient for full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no wasted words. It effectively captures the tool's purpose in minimal space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should clarify what the tool returns (e.g., list of products). Without this, the agent may not know what to expect. The description is adequate for basic use but incomplete in return context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% parameter description coverage, so the schema itself adequately documents each parameter. The tool description adds no extra meaning, leading to the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('recommend'), the resource ('STERDAN products'), and the basis ('user usage scenario'). It effectively distinguishes from sibling tools which are all 'get' operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when a user scenario is provided for product recommendation. While no explicit exclusions or comparisons are given, the context with sibling tools makes the differentiation clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

