MRC Data — China's Apparel Supply Chain Infrastructure

Server Details

China's apparel supply chain data for AI: 1,000+ suppliers, 350+ fabrics, 170+ clusters.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 10 of 10 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct purpose with clear boundaries: search vs. get operations are consistently separated, bidirectional relationship lookups (get_fabric_suppliers vs get_supplier_fabrics) use unambiguous naming, and detect_discrepancy serves a unique fraud-detection function that doesn't overlap with standard retrieval tools.

Naming Consistency: 5/5

Strict snake_case throughout with consistent verb_noun patterns: search_*, get_*, compare_*, and detect_* prefixes are applied uniformly. The inverse relationship tools follow a predictable get_[entity]_[related_entities] structure, making the API surface highly predictable.

Tool Count: 5/5

Ten tools represents an ideal scope for this domain—covering search, detailed retrieval, cross-referencing, comparison, and analytics without bloat. Each tool earns its place: three search tools for the main entities, two detail getters, two relationship lookups, plus comparison, discrepancy detection, and stats.

Completeness: 4/5

Excellent coverage of the apparel supply chain domain with full CRUD-like read operations, cross-referencing, and fraud detection. Minor asymmetry: suppliers and fabrics have dedicated get_detail tools, while clusters lack a get_cluster_detail equivalent (though compare_clusters with single ID substitutes functionally).

Available Tools

19 tools
analyze_market
Read-only

Market overview and analysis for a product category in China.

USE WHEN:

  • User asks "what's the market like for X in China"

  • User wants market intelligence before sourcing

  • User needs an overview, not specific suppliers

  • "市场概况" (market overview) / "行业分析" (industry analysis)

WORKFLOW: Standalone analysis tool. Use this BEFORE search_suppliers to understand market landscape. Then narrow down with search_suppliers or recommend_suppliers. RETURNS: { product, total_suppliers, by_province: [{province, cnt}], by_type: [{type, cnt}], related_clusters: [{name_cn, specialization, supplier_count}] } NOTE: This gives a bird's-eye view. For specific supplier lists, use search_suppliers after.

Parameters (JSON Schema)
  • product (required): Product category to analyze (e.g. sportswear, denim, underwear)
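Invoking any of these tools is an ordinary MCP tools/call request. A minimal sketch in Python, assuming the standard JSON-RPC 2.0 framing MCP uses; the id value is arbitrary:

```python
import json

# Minimal MCP "tools/call" request for analyze_market.
# Only "product" is required by the schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_market",
        "arguments": {"product": "sportswear"},
    },
}

print(json.dumps(request, indent=2))
```

The same envelope works for every tool below; only `name` and `arguments` change.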
check_compliance
Read-only

Check if a supplier meets compliance requirements for a target export market.

USE WHEN:

  • User asks "can this factory export to the US/EU/Japan"

  • User needs to verify certifications for a specific market

  • "能不能出口美国" (can this export to the US) / "合规检查" (compliance check) / "认证要求" (certification requirements)

PREREQUISITE: You MUST have a valid supplier_id from search_suppliers. WORKFLOW: search_suppliers → check_compliance (to verify if a specific supplier can export to target market). RETURNS: { supplier, target_market, passed: [string], issues: [string], market_requirements: {field: value} } ERRORS: Returns error if supplier_id not found. Returns note if compliance data is incomplete. NOTE: Many suppliers have incomplete compliance data. Missing data = "not confirmed", not "non-compliant".

Parameters (JSON Schema)
  • supplier_id (required): Supplier ID from search_suppliers, e.g. sup_001
  • target_market (required): Target export market
compare_clusters (A)
Read-only

Compare multiple Chinese apparel industrial clusters side-by-side on key metrics.

PREREQUISITE: You MUST first call search_clusters to obtain valid cluster_ids. Do not guess IDs.

USE WHEN user wants to evaluate or choose between 2-10 specific clusters (e.g. "compare Humen vs Shishi vs Jinjiang"). Returns full records for each cluster so they can be compared on labor cost, rent, supplier count, scale, specializations, advantages, and risks.

WORKFLOW: search_clusters → collect cluster_ids → compare_clusters. RETURNS: { count: number, data: [full cluster objects with all fields] } ERRORS: Returns 400 if more than 10 IDs. Missing IDs are silently skipped. CONSTRAINT: Max 10 cluster IDs per call.

中文:对比多个产业带的核心指标(最多 10 个)。(English: compare the core metrics of multiple industrial clusters, max 10.)

Parameters (JSON Schema)
  • cluster_ids (required): Array of cluster IDs to compare, max 10
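The documented workflow (search_clusters → collect IDs → compare_clusters) plus the 10-ID cap can be wrapped client-side. A sketch; call_tool and fake_call_tool are hypothetical stand-ins for a real MCP client:

```python
def compare_clusters_safely(call_tool, cluster_ids):
    """Enforce the documented 10-ID limit before calling compare_clusters."""
    if len(cluster_ids) > 10:
        raise ValueError("compare_clusters accepts at most 10 cluster IDs per call")
    return call_tool("compare_clusters", {"cluster_ids": cluster_ids})

def fake_call_tool(name, arguments):
    # Stub standing in for a real MCP client; mimics the documented return shape.
    return {"count": len(arguments["cluster_ids"]), "data": []}

result = compare_clusters_safely(fake_call_tool, ["humen_women", "shishi_casual"])
```

Failing fast client-side avoids burning a round trip on the server's 400 response.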
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true; the description adds valuable behavioral context beyond safety: it discloses that full records are returned (not summaries) and enumerates specific comparison dimensions (labor cost, rent, supplier count, scale, specializations, advantages, and risks). It also states the per-call batch limit (10 clusters).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient four-sentence structure: purpose (sentence 1), usage trigger (sentence 2), return value specification (sentence 3), and Chinese summary (sentence 4). Front-loaded with action, zero redundant text, well-organized with explicit section header 'USE WHEN'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates by detailing return content ('full records') and comparison metrics. Workflow context (post-search usage) is present. Single parameter is simple; no additional complexity requires explanation. Complete for the tool's scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear description 'Array of cluster IDs to compare, max 10'. Description mentions 'cluster ID provided' and reinforces the 10-item limit in Chinese text ('最多 10 个'), but schema carries the primary semantic burden. Baseline 3 appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Compare multiple Chinese apparel industrial clusters side-by-side') and explicitly distinguishes from sibling search_clusters by positioning this as an evaluation step 'typically after search_clusters'. Clear verb + resource combination.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'USE WHEN' section specifying trigger conditions ('evaluate or choose between specific clusters they've identified'). Explicitly names sibling workflow dependency ('typically after search_clusters'), providing clear temporal sequencing guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_suppliers
Read-only

Compare multiple suppliers side by side on all dimensions.

USE WHEN user asks:

  • "compare these 3 factories"

  • "which supplier is better between X and Y"

  • "对比供应商" (compare suppliers)

PREREQUISITE: You MUST have valid supplier_ids from search_suppliers. Do not guess IDs. WORKFLOW: search_suppliers → collect supplier_ids → compare_suppliers (for side-by-side comparison). RETURNS: { count, data: [full supplier profiles with all fields] } ERRORS: Missing IDs are silently skipped. CONSTRAINT: Max 10 supplier IDs per call. Use this instead of calling get_supplier_detail in a loop. DIFFERENCE from get_supplier_detail: This returns multiple suppliers at once for comparison. get_supplier_detail returns one with verified_dimensions breakdown.

Parameters (JSON Schema)
  • supplier_ids (required): Array of supplier IDs from search_suppliers, e.g. ['sup_001', 'sup_002'], max 10
detect_discrepancy (A)
Read-only

[Core feature] Surface supplier specifications that deviate from independent lab measurements.

USE WHEN user asks:

  • "which fabrics have lab-test deviations on weight"

  • "find suppliers whose stated capacity differs from on-site measurements"

  • "compare cotton content lab results across suppliers"

  • "which suppliers have the closest match between specs and lab tests"

  • "实测数据" (lab-measured data) / "数据可信度" (data credibility) / "规格与实测偏差" (spec vs. measured deviation)

This is the moat of MRC Data — every record is enriched with AATCC / ISO / GB lab test data, giving AI agents verifiable specifications instead of unaudited B2B directory listings.

Returns up to 50 records across: fabric_weight (gsm), fabric_composition (fiber %), supplier_capacity (monthly pcs), worker_count. Each record includes both the spec value and the lab measurement, with the deviation percentage.

WORKFLOW: Standalone tool — does not require prior search. Call directly with field type and threshold. RETURNS: { field, min_discrepancy_pct, count, data: [{ id, name, declared_value, tested_value, discrepancy_pct }] } ERRORS: Returns count=0 if no discrepancies above threshold. Max 50 records. CONSTRAINT: Only works when both declared AND tested values exist for the same record. Many records have only one or the other.

中文:识别供应商规格与实测值偏差较大的记录。返回规格值、实测值、偏差百分比。(English: identify records where supplier specs deviate significantly from lab-measured values; returns the spec value, measured value, and deviation percentage.)

Parameters (JSON Schema)
  • field (required): Type of discrepancy to detect: fabric_weight (fabric weight, gsm) / fabric_composition (fiber composition) / supplier_capacity (capacity) / worker_count (worker count)
  • min_discrepancy_pct (optional): Minimum discrepancy threshold as a percentage (e.g. 10 = only show ≥10% mismatch)
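The exact formula behind discrepancy_pct is not documented; a plausible reading (an assumption, not confirmed by the server) is relative deviation from the declared value. A client-side re-check under that assumption:

```python
def discrepancy_pct(declared, tested):
    """Relative deviation of the lab-tested value from the declared spec, in percent."""
    return abs(tested - declared) / declared * 100

# Invented sample records mirroring the documented return fields.
records = [
    {"id": "FAB-W007", "declared_value": 180.0, "tested_value": 162.0},  # gsm
    {"id": "FAB-W010", "declared_value": 200.0, "tested_value": 198.0},
]

min_discrepancy_pct = 5.0
flagged = [
    r for r in records
    if discrepancy_pct(r["declared_value"], r["tested_value"]) >= min_discrepancy_pct
]
```

Here only the first record clears the 5% threshold (a 10% under-weight), matching how a min_discrepancy_pct filter would behave.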
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With readOnlyHint=true in annotations, the description appropriately adds behavioral context: it discloses the return format (up to 50 records), ranking logic (by discrepancy percentage), and data included (both declared and verified values). It also explains this is MRC Data's unique verification 'moat'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: core feature, trigger conditions, differentiation rationale, and technical specs. The Chinese translation serves a functional purpose for bilingual routing. While the 'moat' language is slightly promotional, it efficiently communicates unique value without excessive fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description appropriately explains return values (up to 50 ranked records with both values). It covers all 4 detectable discrepancy types and the threshold parameter. Given the analytical nature and good annotations, the description provides complete context for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage, the description adds valuable semantic context by mapping abstract field names to business units: fabric_weight (gsm), fabric_composition (fiber %), supplier_capacity (monthly pcs). This helps the LLM understand parameter intent beyond the schema's enum descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (detect discrepancies) and resources (supplier-declared vs lab-verified values). It effectively distinguishes from retrieval-focused siblings by emphasizing cross-checking and verification capabilities, contrasting with 'generic B2B directories' that only show self-reported numbers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The explicit 'USE WHEN' section provides concrete query patterns (e.g., 'which fabrics have under-weight issues', 'fabric composition fraud', Chinese queries like '实测和声称差距') that precisely signal when to invoke this tool versus simple search or retrieval alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

estimate_cost
Read-only

Estimate sourcing cost for a product based on fabric price, supplier pricing, and order quantity.

USE WHEN:

  • User asks "how much would it cost to make 1000 t-shirts"

  • User needs a rough cost breakdown for budgeting

  • "多少钱" (how much) / "成本估算" (cost estimate) / "报价" (quotation)

WORKFLOW: Standalone tool. Optionally use search_fabrics first to identify specific fabric_ids for more accurate estimates. RETURNS: { product, fabric_options: [{name, price_range}], estimated_cost_per_piece, total_estimate, breakdown } CONSTRAINT: These are estimates based on database averages, NOT binding quotes. Always clarify this to the user. NOTE: Cost accuracy improves when you provide a specific fabric_id instead of just a product name.

Parameters (JSON Schema)
  • product (required): Product type (e.g. t-shirt, hoodie, down jacket)
  • province (optional): Preferred sourcing province
  • quantity (optional): Order quantity in pieces
  • fabric_category (optional): Fabric category: knit, woven, functional
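Because the description stresses these are estimates rather than binding quotes, a client may want to sanity-check the returned numbers. A sketch under the assumption (not documented) that total_estimate equals estimated_cost_per_piece times quantity:

```python
def check_estimate(response, quantity, tolerance=0.01):
    """Verify total_estimate is within tolerance of per-piece cost * quantity."""
    expected = response["estimated_cost_per_piece"] * quantity
    return abs(response["total_estimate"] - expected) <= tolerance * expected

# Invented sample response; field names follow the RETURNS sketch above.
sample = {"estimated_cost_per_piece": 28.5, "total_estimate": 28500.0}
ok = check_estimate(sample, quantity=1000)
```

A mismatch beyond the tolerance would signal that the breakdown includes costs (setup, shipping) not captured by simple multiplication.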
find_alternatives
Read-only

Find alternative suppliers similar to a given supplier.

USE WHEN:

  • User says "this supplier is too expensive / too slow / too far"

  • User needs backup options for an existing supplier

  • "有没有替代" (any alternatives) / "找类似的" (find something similar) / "换一家" (switch suppliers)

Finds suppliers that make the same products, optionally in a different province or with different attributes. Results exclude the original supplier.

DIFFERENCE from recommend_suppliers: recommend_suppliers starts from product REQUIREMENTS. This tool starts from a KNOWN supplier_id and finds similar alternatives. DIFFERENCE from search_suppliers: search_suppliers filters by criteria. This tool uses an existing supplier as the baseline reference.

RETURNS: { reference_supplier, alternatives: [supplier objects], attribution } ERRORS: Returns error if supplier_id not found.

Parameters (JSON Schema)
  • limit (optional)
  • reason (optional): Why looking for alternatives. Default: any
  • province (optional): Preferred province for alternatives
  • supplier_id (required): Current supplier ID to find alternatives for
get_cluster_suppliers
Read-only

List all suppliers in a specific industrial cluster.

USE WHEN user asks:

  • "what factories are in Humen cluster"

  • "show me suppliers in Keqiao fabric market"

  • "虎门产业带有哪些供应商" (which suppliers are in the Humen cluster)

PREREQUISITE: You MUST have a valid cluster_id from search_clusters. WORKFLOW: search_clusters → pick cluster_id → get_cluster_suppliers (to see all factories in that cluster). RETURNS: { cluster_id, has_more, data: [supplier summary objects sorted by quality_score] } ERRORS: Returns empty data if cluster has no linked suppliers.

Parameters (JSON Schema)
  • limit (optional)
  • offset (optional)
  • cluster_id (required): Cluster ID from search_clusters, e.g. humen_women, keqiao_fabric, shishi_casual
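The limit/offset parameters and the has_more flag suggest conventional pagination. A sketch of draining all pages; call_tool and its stub are hypothetical, and the paging semantics are inferred from the field names:

```python
def fetch_all_cluster_suppliers(call_tool, cluster_id, page_size=50):
    """Page through get_cluster_suppliers until has_more is false."""
    suppliers, offset = [], 0
    while True:
        resp = call_tool("get_cluster_suppliers",
                         {"cluster_id": cluster_id, "limit": page_size, "offset": offset})
        suppliers.extend(resp["data"])
        if not resp["has_more"]:
            return suppliers
        offset += page_size

def fake_call_tool(name, args):
    # Stub serving 120 fake supplier summaries in pages, for illustration only.
    total = [{"supplier_id": f"sup_{i:03d}"} for i in range(120)]
    page = total[args["offset"]:args["offset"] + args["limit"]]
    return {"data": page, "has_more": args["offset"] + args["limit"] < len(total)}

all_suppliers = fetch_all_cluster_suppliers(fake_call_tool, "humen_women")
```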
get_fabric_detail (A)
Read-only

Get the complete lab-tested record of a single fabric by ID.

PREREQUISITE: You MUST first call search_fabrics to obtain a valid fabric_id. Do not guess IDs.

USE WHEN user wants full specs on a specific fabric after search_fabrics identified it. Returns 30+ fields: lab-tested weight, lab-tested composition, color fastness (wash/light/rub per AATCC 61/16/8), shrinkage (warp/weft per AATCC 135), tensile/tear strength, pilling grade, hand feel, drape, stretch/recovery, MOQ, lead time, price range.

WORKFLOW: search_fabrics → pick fabric_id → get_fabric_detail. Optionally follow with get_fabric_suppliers to find which factories supply this fabric. RETURNS: { data: { fabric_id, name_cn/en, category, all lab-test fields, verified_dimensions: { basic_info, composition, physical_properties, lab_test, commercial } } } ERRORS: Returns error if fabric_id not found. Unverified fabrics return "not available". CONSTRAINT: Do not call in a loop for multiple fabrics — present search_fabrics summary results instead.

中文:按 ID 获取单个面料的完整实测档案(含 AATCC/ISO/GB 检测指标)。(English: fetch a single fabric's complete lab-tested profile by ID, including AATCC/ISO/GB test metrics.)

Parameters (JSON Schema)
  • fabric_id (required): Fabric ID from search_fabrics results, e.g. FAB-W007
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, and description adds substantial behavioral context: enumerates 30+ specific return fields (weight, composition, color fastness, shrinkage, etc.), confirms lab-tested data source, and describes the payload comprehensively. Minor gap: doesn't specify error behavior for invalid IDs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure: purpose statement → usage trigger → detailed return value specification → localization. Each section earns its place; the enumerated field list substitutes for missing output schema. Bilingual text is appropriate for the domain without being redundant.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Thoroughly compensates for missing output schema by explicitly listing 30+ returned fields and their categories. Establishes clear relationship to search_fabrics sibling. For a single-parameter lookup tool, the description provides exhaustive context for successful invocation and result interpretation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (fabric_id well documented with examples). Description mentions 'by ID' which aligns with schema but doesn't add additional semantic meaning, format constraints, or validation rules beyond the schema definition. Baseline 3 appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'Get' + resource 'lab-tested record of a single fabric' + scope 'by ID'. Explicitly distinguishes from search_fabrics sibling by specifying this retrieves a single record by ID versus searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'USE WHEN' directive stating exactly when to invoke (user wants full specs on specific fabric) and workflow context (typically after search_fabrics), clearly positioning it in the tool chain and indicating the prerequisite search step.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_fabric_suppliers (A)
Read-only

List all suppliers offering a specific fabric, sorted by quality score, with price comparison.

USE WHEN user asks:

  • "who supplies fabric fab_XXX" / "where can I buy this fabric"

  • "compare prices for [fabric] across suppliers"

  • "best supplier for [fabric specification]"

Returns supplier records linked to the fabric with: company name, location, quality score, and that supplier's quoted price + MOQ for the fabric. Sorted by supplier quality score so the most reliable options appear first.

PREREQUISITE: You MUST have a valid fabric_id from search_fabrics. WORKFLOW: search_fabrics → pick fabric_id → get_fabric_suppliers (to compare which factories supply it at what price). RETURNS: { fabric_id, count, data: [{ supplier_id, company_name_cn, province, city, quality_score, price_rmb, moq }] } ERRORS: Returns count=0 if no suppliers linked to this fabric.

中文:查询某面料的所有供应商,按质量评分排序,含报价对比。(English: list all suppliers of a given fabric, sorted by quality score, with price comparison.)

Parameters (JSON Schema)
  • fabric_id (required): Fabric ID from search_fabrics, e.g. FAB-W007
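Results arrive sorted by quality score, so price-driven selection needs a client-side re-sort. A sketch over the documented record fields (sample values invented):

```python
def cheapest_for_order(records, order_qty):
    """Among suppliers whose MOQ fits the order, pick the lowest quoted price."""
    eligible = [r for r in records if r["moq"] <= order_qty]
    return min(eligible, key=lambda r: r["price_rmb"], default=None)

# Invented sample records using the fields named in the RETURNS sketch.
records = [
    {"supplier_id": "sup_001", "quality_score": 92, "price_rmb": 38.0, "moq": 500},
    {"supplier_id": "sup_002", "quality_score": 88, "price_rmb": 31.5, "moq": 2000},
    {"supplier_id": "sup_003", "quality_score": 85, "price_rmb": 29.0, "moq": 5000},
]

best = cheapest_for_order(records, order_qty=3000)
```

For an order of 3,000 pieces the 5,000-MOQ supplier is ineligible, so the cheapest remaining quote wins despite a lower quality score.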
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With readOnlyHint already declaring this a safe read operation, the description adds valuable behavioral context including the sorting logic ('Sorted by supplier quality score') and the complete return payload structure ('company name, location, quality score, and that supplier's quoted price + MOQ'). It discloses the ranking algorithm so agents understand why results appear in a specific order. The score is 4 rather than 5 due to omitted details regarding pagination, rate limits, or behavior when no suppliers exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description employs efficient section headers ('USE WHEN', 'Returns') to organize information hierarchically, with the core value proposition stated in the opening sentence. While it includes a Chinese translation that technically duplicates content, this serves legitimate localization purposes for bilingual contexts. The text avoids redundancy with structured schema data, though the explicit field listing in the Returns section slightly exceeds minimal necessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Absent an output schema, the description effectively compensates by detailing the return structure (supplier records with company name, location, quality score, price, and MOQ) and explaining the sorting mechanism. For a single-parameter read-only tool, this coverage is sufficient for an agent to understand both the request and response contracts. It could achieve a 5 by specifying edge case behavior (empty results) or pagination capabilities.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema provides complete coverage for the single fabric_id parameter with the description 'Fabric ID', establishing a baseline of 3 per the scoring guidelines. While the description references 'fabric fab_XXX' in usage examples, it does not elaborate on parameter semantics beyond what the schema already provides. Given the 100% schema coverage, the description appropriately focuses on behavioral aspects rather than compensating for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'List' targeting 'suppliers offering a specific fabric', clearly defining the resource and action. It distinguishes from siblings like get_supplier_fabrics (inverse relationship) and get_fabric_detail by focusing on supplier discovery for a specific fabric. The explicit mention of 'sorted by quality score' and 'price comparison' further differentiates it from general supplier search tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes an explicit 'USE WHEN' section with concrete user query patterns like 'who supplies fabric fab_XXX' and 'compare prices for [fabric]'. These examples provide clear guidance on when to select this tool versus alternatives such as get_supplier_detail (single supplier lookup) or search_suppliers (general search). The conditional triggers map directly to the tool's specific capability of listing suppliers for a given fabric ID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_product_categories
Read-only

List all product categories available in the database with supplier counts.

USE THIS FIRST when:

  • User doesn't know what to search for

  • User asks "what do you have" / "what can I source"

  • User needs to explore the database

  • "有哪些品类" (what categories are there) / "能找什么" (what can I source)

WORKFLOW: Standalone discovery tool. Call this first to understand what's available, then use search_suppliers with a specific product_type. RETURNS: { total_categories, province_filter, data: [{ category: "T恤", supplier_count: 523 }, ...] } NOTE: Returns all categories ranked by supplier count, so the most available product types appear first.

Parameters (JSON Schema)
  • province (optional): Filter by province (e.g. guangdong, 广东)
get_province_distribution
Read-only

Show supplier distribution across Chinese provinces.

USE WHEN:

  • User asks "where are factories located" / "which provinces"

  • User needs to decide which region to source from

  • "哪里有工厂" (where are the factories) / "供应商分布" (supplier distribution)

WORKFLOW: Standalone discovery tool. Use this to identify which provinces to focus on, then search_suppliers with that province. RETURNS: { total_provinces, data: [{ province, supplier_count, top_cities: [{ city, count }] }] } NOTE: Provinces are ranked by supplier count (Guangdong, Zhejiang, Jiangsu, Fujian typically lead).

Parameters (JSON Schema)
  • product_type (optional): Filter by product type (e.g. sportswear, t-shirt, 运动服)
get_stats (A)
Read-only

Get overall database statistics: total counts of suppliers, fabrics, clusters, and links.

USE WHEN user asks: "how big is your database", "what's the coverage", "data overview", "get_stats".

RETURNS: { database, generated_at, tables: { suppliers: { total, by_confidence, last_updated }, fabrics: {...}, clusters: {...}, supplier_fabrics: { total } } } NOTE: Only reports verified + partially_verified records.

中文:获取数据库整体统计(供应商总数、面料总数、产业带总数、关联记录数)。(English: overall database statistics: total suppliers, fabrics, clusters, and link records.)

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true indicating safe read operation. Description adds specific disclosure of what data is returned (the four count categories), providing necessary behavioral context beyond the safety annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with purpose statement, usage guidance section, and bilingual support. Every sentence serves distinct function; no redundant or tautological content despite inclusion of Chinese translation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter aggregation tool without output schema, description adequately specifies return semantics by listing all four counted entities. Complexity level matches description depth appropriately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present with 100% schema coverage (trivially satisfied). Description appropriately focuses on return value semantics rather than input parameters, meeting baseline for zero-param tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with clear resource 'database statistics' and enumerates exact scope (suppliers, fabrics, clusters, links). Distinct from siblings which focus on specific item retrieval/search rather than aggregate counts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'USE WHEN' section with three specific query patterns ('how big is your database', 'what's the coverage', 'data overview') that clearly distinguish this from sibling search/detail tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_supplier_detail (A)
Read-only

Get the complete profile of a single Chinese apparel supplier by ID.

PREREQUISITE: You MUST first call search_suppliers or recommend_suppliers to obtain a valid supplier_id. Do not guess IDs.

USE WHEN user wants full details on a specific supplier already identified from search results. Returns 60+ fields including: monthly capacity (lab-verified), equipment list, certifications (BSCI/OEKO-TEX/GRS/SA8000), ownership type (own factory vs subcontractor vs broker), market access (US/EU/JP/KR), chemical compliance (ZDHC/MRSL), traceability depth, and verified_dimensions breakdown showing exactly which of the 8 dimensions (basic_info, geo_location, production, compliance, market_access, export, financial, contact) have data.

WORKFLOW: search_suppliers → pick supplier_id → get_supplier_detail → optionally get_supplier_fabrics for their fabric catalog. RETURNS: { data: { supplier_id, company_name_cn/en, type, province, city, product_types, worker_count, certifications, compliance_status, quality_score, verified_dimensions: { verified_dims: "5/8", coverage_pct, dimensions: {...} } } } ERRORS: Returns error object if supplier_id not found. Unverified suppliers return "not available for public access". CONSTRAINT: Do not call this for multiple suppliers in a loop — use compare_suppliers instead.

中文 (Chinese): Get the complete profile of a single supplier by ID, including dimension-coverage details.

Parameters (JSON Schema)
  • supplier_id (required): Supplier ID from search_suppliers results, e.g. sup_001
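The PREREQUISITE and WORKFLOW above can be sketched in a few lines. This is a minimal sketch, not the server's implementation: `call_tool(name, args)` is a hypothetical MCP client helper, the transport is stubbed, and field names follow the RETURNS shape quoted in the description.

```python
def dimension_coverage(verified_dims: str) -> float:
    """Turn a verified_dims string like '5/8' into a coverage ratio."""
    verified, total = (int(part) for part in verified_dims.split("/"))
    return verified / total


def pick_and_inspect(call_tool, product_type: str) -> dict:
    """Search first, then fetch detail -- never guess supplier IDs."""
    hits = call_tool("search_suppliers", {"product_type": product_type})
    if not hits["data"]:
        raise LookupError(f"no suppliers found for {product_type!r}")
    supplier_id = hits["data"][0]["supplier_id"]  # ID taken from real results
    return call_tool("get_supplier_detail", {"supplier_id": supplier_id})


# Stubbed transport so the sketch runs without a live server.
def fake_call_tool(name: str, args: dict) -> dict:
    if name == "search_suppliers":
        return {"has_more": False, "data": [{"supplier_id": "sup_001"}]}
    return {"data": {"supplier_id": args["supplier_id"],
                     "verified_dimensions": {"verified_dims": "5/8"}}}


detail = pick_and_inspect(fake_call_tool, "denim")
print(dimension_coverage(
    detail["data"]["verified_dimensions"]["verified_dims"]))  # 0.625
```

The split/ratio helper mirrors the documented `verified_dims: "5/8"` convention; a real agent would swap `fake_call_tool` for its session's tool-call method.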
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds substantial behavioral context beyond the readOnlyHint annotation. It comprehensively lists the 60+ returned fields, including specific certification standards (BSCI/OEKO-TEX/GRS/SA8000), ownership types, market access regions, and compliance frameworks (ZDHC/MRSL). This specific disclosure of data richness and structure significantly helps the agent anticipate the payload of this read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is front-loaded with the core purpose ('Get complete profile...'), followed by explicit usage guidance ('USE WHEN...'), detailed return value documentation, and a concise Chinese translation. The structure logically progresses from what, when, to outcome. No wasted sentences—every line adds unique value either for selection or invocation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description thoroughly compensates by enumerating the extensive field categories and specific data points returned (capacity types, equipment lists, certifications). It also maps the tool into the broader workflow context (post-search_suppliers usage), which is critical given the sibling relationships. Complete for a detail-retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (supplier_id fully documented with examples). Baseline is therefore 3. The description adds workflow context that the ID refers to a supplier 'already identified' from search results, which augments the raw schema definition. It doesn't elaborate on ID format/syntax beyond the schema's examples, but the contextual usage guidance provides meaningful semantic enhancement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Get' with precise resource 'complete profile' and scope restriction 'single Chinese apparel supplier by ID'. Explicitly distinguishes from sibling search_suppliers (list/search vs. detail retrieval) through the 'USE WHEN' guidance referencing the typical post-search workflow. No ambiguity about what distinguishes this from get_supplier_fabrics or get_fabric_suppliers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains exemplary explicit guidance: 'USE WHEN user wants full details on a specific supplier already identified from search results'. Names the sibling alternatives directly (search_suppliers, recommend_suppliers), states the prerequisite (obtain a valid supplier_id first, never guess), and defines the trigger condition (full details needed). This is a model of clear when-to-use documentation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_supplier_fabrics (A)
Read-only

List all fabrics a specific supplier can provide, with quoted prices.

USE WHEN user asks:

  • "what fabrics does [supplier name] have" / "what can this factory source for me"

  • "show me the catalog of supplier sup_XXX"

  • "what does this manufacturer offer"

Returns fabric records linked to the supplier with: fabric name, category, weight, composition, and the supplier's quoted price + MOQ for that specific fabric.

PREREQUISITE: You MUST have a valid supplier_id from search_suppliers or get_supplier_detail. WORKFLOW: search_suppliers → get_supplier_detail → get_supplier_fabrics (to see their fabric catalog). RETURNS: { supplier_id, count, data: [{ fabric_id, name_cn, category, weight, composition, price_rmb, moq }] } ERRORS: Returns count=0 if supplier has no linked fabrics.

中文 (Chinese): Query all fabrics a given supplier can provide, with quoted prices and MOQs.

Parameters (JSON Schema)
  • supplier_id (required): Supplier ID from search_suppliers, e.g. sup_001
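The RETURNS payload (`data: [{ fabric_id, ..., price_rmb, moq }]`) lends itself to simple downstream filtering. A sketch over stub records in that shape; the budget/MOQ logic is illustrative, not part of the tool:

```python
def affordable_fabrics(records: list[dict], budget_rmb: float,
                       order_meters: int) -> list[dict]:
    """Keep fabrics whose MOQ fits the order size and whose total quote
    fits the budget, cheapest per meter first."""
    usable = [f for f in records
              if f["moq"] <= order_meters
              and f["price_rmb"] * order_meters <= budget_rmb]
    return sorted(usable, key=lambda f: f["price_rmb"])


# Stub catalog in the documented shape (fabric_id, price_rmb, moq, ...).
catalog = [
    {"fabric_id": "fab_010", "name_cn": "cotton knit", "price_rmb": 28.0, "moq": 500},
    {"fabric_id": "fab_011", "name_cn": "polyester woven", "price_rmb": 12.5, "moq": 2000},
    {"fabric_id": "fab_012", "name_cn": "linen", "price_rmb": 45.0, "moq": 300},
]
picks = affordable_fabrics(catalog, budget_rmb=30000, order_meters=1000)
print([f["fabric_id"] for f in picks])  # ['fab_011' is blocked by MOQ, 'fab_012' by budget]
```

Only fab_010 survives here: fab_011's MOQ (2000 m) exceeds the 1000 m order, and fab_012's total quote exceeds the budget.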
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true; description adds substantial return-value context: specific fields returned (fabric name, category, weight, composition, quoted price + MOQ) and the linkage model ('linked to the supplier'). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose statement, USE WHEN triggers, return value specification, and Chinese translation. Front-loaded with the core action. Each sentence earns its place; bilingual support justifies the additional length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read operation, description comprehensively covers: query intent, expected return structure (fabric records with 6 specific fields), business context (quoted prices, MOQ), and trigger conditions. No output schema exists, but description adequately compensates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (the single parameter supplier_id is described as 'Supplier ID from search_suppliers, e.g. sup_001'). The description implies the ID format through the example 'sup_XXX' in the usage guidelines but does not substantially augment the schema's semantic documentation. Baseline 3 is appropriate given complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: verb 'List' + resource 'fabrics' + scope 'a specific supplier can provide' + value-add 'with quoted prices'. Clearly distinguishes from sibling get_fabric_suppliers (inverse relationship) and get_fabric_detail (single item vs catalog).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Exceptional 'USE WHEN' section provides four concrete trigger phrases including 'what fabrics does [supplier name] have' and 'show me the catalog of supplier sup_XXX'. Explicitly maps user intent to tool selection, eliminating ambiguity with inverse operation get_fabric_suppliers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recommend_suppliers
Read-only

Smart supplier recommendation based on sourcing requirements.

USE WHEN:

  • User describes what they need: "I need a factory for cotton t-shirts in Guangdong"

  • User asks for recommendations, not just search results

  • "推荐供应商 (recommend suppliers)" / "帮我找合适的工厂 (help me find a suitable factory)"

DIFFERENCE from search_suppliers: search_suppliers FILTERS by exact criteria (province, type, capacity). This tool RANKS by fit — prioritizes own-factory, then quality score, then capacity. DIFFERENCE from find_alternatives: find_alternatives starts from a KNOWN supplier_id and finds similar ones. This tool starts from product REQUIREMENTS.

RETURNS: { query, total_matches, showing_top, note: "ranking logic", data: [supplier objects] } ERRORS: Returns empty data if no product match found.

Parameters (JSON Schema)
  • type (optional): Prefer own factory or trading company
  • limit (optional)
  • product (required): What product to source (e.g. sportswear, t-shirt, down jacket)
  • province (optional): Preferred province
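The stated ranking ("prioritizes own-factory, then quality score, then capacity") can be approximated with a composite sort key. A sketch over made-up candidate records; the server's actual scoring may differ:

```python
def rank_suppliers(candidates: list[dict]) -> list[dict]:
    """Own factory first, then quality score, then monthly capacity,
    all descending -- an approximation of the note's ranking logic."""
    return sorted(
        candidates,
        key=lambda s: (s["type"] == "factory",   # True sorts above False
                       s["quality_score"],
                       s["monthly_capacity"]),
        reverse=True,
    )


candidates = [
    {"supplier_id": "sup_001", "type": "trading_company",
     "quality_score": 9, "monthly_capacity": 100_000},
    {"supplier_id": "sup_002", "type": "factory",
     "quality_score": 7, "monthly_capacity": 50_000},
    {"supplier_id": "sup_003", "type": "factory",
     "quality_score": 7, "monthly_capacity": 80_000},
]
print([s["supplier_id"] for s in rank_suppliers(candidates)])
# ['sup_003', 'sup_002', 'sup_001']
```

Note how the lexicographic tuple makes ownership type dominate quality score, matching the description's priority order: the trading company ranks last despite the highest score.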
search_clusters (A)
Read-only

Search Chinese apparel industrial clusters and textile markets.

USE WHEN user asks:

  • "where is China's [denim / suit / women's wear / underwear] manufacturing concentrated"

  • "what is the largest [silk / cashmere / down jacket] industrial cluster in China"

  • "industrial cluster comparison Humen vs Shaoxing vs Haining vs Zhili"

  • "recommend an industrial cluster for sourcing [product]"

  • "服装产业带 (apparel industry belt) / 面料市场 (fabric market) / 产业集群 (industrial cluster)"

Famous clusters this database covers include: Humen (Guangdong, womenswear), Shaoxing Keqiao (Zhejiang, fabric mega-market), Haining (Zhejiang, leather), Zhili (Zhejiang, children's wear), Shengze (Jiangsu, silk), Shantou (Guangdong, underwear), Puning (Guangdong, jeans), Jinjiang (Fujian, sportswear), and more.

Returns paginated cluster list with name, location, specialization, scale, supplier count, average rent and labor cost, and key advantages/risks.

WORKFLOW: Use this to discover clusters. Then use compare_clusters with cluster_ids to compare side-by-side, or get_cluster_suppliers to list factories in a specific cluster. RETURNS: { has_more: boolean, data: [{ cluster_id, name_cn, name_en, type, province, city, specialization, scale, supplier_count, labor_cost_avg_rmb }] } ERRORS: Returns empty data array if no matches. FALLBACK: If no results for a specialization, try broader terms (e.g. "服装" instead of "牛仔"). Chinese and English both work.

中文 (Chinese): Search China's apparel industrial clusters and fabric markets.

Parameters (JSON Schema)
  • type (optional): Cluster type: fabric_market (面料市场) / garment_manufacturing (服装制造) / accessories (辅料) / integrated (综合)
  • limit (optional)
  • scale (optional): Cluster scale: mega / large / medium / small
  • offset (optional)
  • province (optional): Province in China (e.g. Guangdong, Zhejiang, Jiangsu, Fujian, Shandong)
  • specialization (optional): Primary specialization keyword (e.g. 牛仔 denim, 女装 womenswear, 童装 childrenswear, 内衣 underwear, 运动服 sportswear)
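The `has_more`/`offset` contract in RETURNS implies a standard drain loop. A sketch with a hypothetical `call_tool` helper and a stubbed two-page response so it runs standalone:

```python
def fetch_all_clusters(call_tool, limit: int = 50, **filters) -> list[dict]:
    """Drain a paginated search by advancing offset until has_more is false."""
    offset, rows = 0, []
    while True:
        page = call_tool("search_clusters",
                         {**filters, "limit": limit, "offset": offset})
        rows.extend(page["data"])
        if not page["has_more"]:
            return rows
        offset += limit


# Stub transport returning two pages in the documented shape.
pages = [
    {"has_more": True,
     "data": [{"cluster_id": "clu_001"}, {"cluster_id": "clu_002"}]},
    {"has_more": False,
     "data": [{"cluster_id": "clu_003"}]},
]


def fake_call_tool(name: str, args: dict) -> dict:
    return pages[args["offset"] // args["limit"]]


clusters = fetch_all_clusters(fake_call_tool, limit=2, province="Zhejiang")
print(len(clusters))  # 3
```

An agent following the description's 3-call guidance would cap this loop rather than draining a large result set unconditionally.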
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, confirming safe read access. The description adds valuable behavioral context absent from annotations: it specifies pagination ('Returns paginated cluster list') and details the exact data fields returned (rent, labor costs, advantages/risks), compensating for the missing output_schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent information density with clear visual hierarchy: one-sentence purpose, bulleted use cases, named cluster examples, return value disclosure, and Chinese translation. Every sentence serves selection or invocation. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero required parameters and no output schema, the description fully compensates by explaining what filtering dimensions exist (via USE WHEN examples) and detailing the return payload structure. The enumeration of famous clusters (Humen, Zhili, etc.) provides critical domain grounding for an AI to match user queries correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 67% (4/6 params described). While the description doesn't explicitly map parameters, it provides rich semantic examples in the USE WHEN block (e.g., 'denim', 'womenswear', 'childrenswear') that implicitly clarify the 'specialization' and 'type' parameters. Standard pagination params (limit/offset) lack description but are conventionally understood.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb-noun phrase ('Search Chinese apparel industrial clusters and textile markets') and immediately distinguishes scope from siblings like search_fabrics or search_suppliers by listing famous clusters it covers (Humen, Shaoxing Keqiao, etc.). The domain specificity (apparel/textile) is precisely bounded.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'USE WHEN' section provides explicit, concrete trigger phrases ('where is China's [denim] manufacturing concentrated', 'recommend an industrial cluster for sourcing') that condition the model to invoke this tool versus alternatives like compare_clusters or search_suppliers. Including Chinese keywords ('服装产业带') further sharpens invocation criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_fabrics (A)
Read-only

Search the Chinese fabric and textile database with lab-tested specifications.

USE WHEN user asks:

  • "find me a [cotton / polyester / nylon / wool / linen] fabric for [t-shirts / jeans / suits]"

  • "I need 180gsm jersey knit with verified composition"

  • "fabrics under N RMB/meter for womenswear"

  • "compare lab-tested fabric weight across suppliers"

  • "找面料 / 搜面料 / 查面料" (find / search / look up fabrics)

Filters: category (woven/knit/nonwoven/leather/functional), weight range (gsm), composition keyword, target apparel type, max price. Returns paginated fabric list with name, lab-tested weight, lab-tested composition, price range, suitable apparel, and data confidence level.

WORKFLOW: Use this as the entry point for fabric discovery. After finding a fabric, use get_fabric_detail for full lab-test data, or get_fabric_suppliers to see which factories supply it. RETURNS: { has_more: boolean, available_dimensions: ["basic_info","composition","physical_properties","lab_test","commercial"], data: [{ fabric_id, name_cn, category, subcategory, declared_weight_gsm, declared_composition, price_range_rmb, suitable_for, verified_dims: "4/5", coverage_pct }] } ERRORS: Returns empty data array if no matches. Max 50 per page. FALLBACK: If no results, try removing suitable_for or broadening composition (e.g. "cotton" instead of "organic cotton"). Do not call more than 3 times for the same question. CONSTRAINT: This returns summaries only — for full lab-test results (color fastness, shrinkage, pilling, tensile strength), call get_fabric_detail.

中文 (Chinese): Search the fabric database, filtering by category, weight (gsm), composition, target apparel, and price. Every record carries lab-test data measured per AATCC / ISO / GB methods.

Parameters (JSON Schema)
  • limit (optional)
  • offset (optional)
  • category (optional): Fabric category: woven (梭织) / knit (针织) / nonwoven (无纺) / leather (皮革) / fur (毛皮) / functional (功能性)
  • composition (optional): Fiber composition keyword (e.g. cotton, polyester, spandex, nylon, wool, linen, 棉, 涤纶)
  • suitable_for (optional): Target apparel keyword (e.g. T恤 t-shirt, 衬衫 shirt, 牛仔 denim, 连衣裙 dress)
  • max_price_rmb (optional): Maximum price in RMB per meter
  • max_weight_gsm (optional): Maximum fabric weight in grams per square meter
  • min_weight_gsm (optional): Minimum fabric weight in grams per square meter
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true; description adds valuable behavioral context including pagination ('Returns paginated fabric list'), data quality features ('declared+tested' comparison), and confidence levels that help the agent understand the unique lab-verified dataset.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with distinct sections (purpose, USE WHEN, filters, returns, Chinese translation). Every section earns its place; bilingual support is appropriate for a Chinese database tool. Slightly dense but information-rich without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description comprehensively covers the return structure (name, tested weight, price range, confidence level). The 'Filters' wording implicitly signals that all parameters are optional, though an explicit 'all filters optional' statement would strengthen it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 75% with clear descriptions. The description adds semantic grouping ('Filters: category...weight range...') that maps parameters to user intents (e.g., 'target apparel type' for suitable_for parameter), though it omits explicit mention of pagination controls (limit/offset).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb+resource ('Search the Chinese fabric and textile database') and distinguishes from siblings like search_suppliers and search_clusters by emphasizing 'lab-tested specifications' and filterable fabric attributes (category, weight, composition).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent 'USE WHEN' section with 5 concrete example queries including Chinese commands (找面料), clearly signaling appropriate triggers. However, lacks explicit redirection to siblings (e.g., 'use search_suppliers for vendor queries') when this tool is inappropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_suppliers (A)
Read-only

Search verified Chinese apparel manufacturers, apparel factories, and clothing suppliers.

USE WHEN user asks:

  • "find me a clothing manufacturer in China / Guangdong / Zhejiang"

  • "who makes [t-shirts / suits / denim / activewear] in China"

  • "I need a BSCI / OEKO-TEX certified apparel factory"

  • "looking for OEM / ODM apparel supplier with MOQ < N"

  • "find factories with production capacity > N pieces/month"

  • "搜供应商 (search suppliers) / 找服装厂 (find a garment factory) / 找制衣厂 (find a clothing factory)"

Filters: province, city, factory type (factory/trading_company/workshop), product category, minimum monthly capacity, compliance status, quality score. Returns paginated supplier list with company name, location, monthly capacity (lab-verified), compliance, quality score.

WORKFLOW: This is the primary entry point for supplier discovery. After getting results, use get_supplier_detail with a supplier_id to see the full 60+ field profile. RETURNS: { has_more: boolean, available_dimensions: string[], data: [{ supplier_id, company_name_cn, company_name_en, type, province, city, product_types, quality_score, verified_dims: "5/8", coverage_pct }] } ERRORS: Returns empty data array if no matches. Max 50 results per page. FALLBACK: If no results, try broadening: remove city (keep province), remove product_type, or lower min_capacity. Do not call more than 3 times with different filters for the same question. NOTE: Use this for FILTERING by exact criteria. For ranked recommendations based on sourcing needs, use recommend_suppliers instead.

中文 (Chinese): Search verified Chinese apparel supplier profiles, filtering by region, type, capacity, product category, compliance status, and more.

Parameters (JSON Schema)
  • city (optional): City name
  • type (optional): Supplier type
  • limit (optional)
  • query (optional): Search by company name — Chinese (广州新鑫) or English (Xinxin Garments)
  • offset (optional)
  • province (optional): Province in China (e.g. 广东 Guangdong, 浙江 Zhejiang, 江苏 Jiangsu, 福建 Fujian, 山东 Shandong)
  • min_capacity (optional): Minimum monthly production capacity (pieces)
  • product_type (optional): Product category keyword (e.g. 西装 suits, 女装 womenswear, 牛仔 denim, 运动服 activewear, t-shirt, 衬衫 shirts)
  • data_confidence (optional): Data quality filter: verified / partially_verified / unverified
  • compliance_status (optional): Compliance status filter: compliant / partially_compliant / non_compliant
  • min_quality_score (optional): Minimum quality score 1-10
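The FALLBACK guidance (broaden by removing city, then product_type, within at most three calls) is essentially a relaxation loop. A sketch with a hypothetical `call_tool` helper and a stub where matches exist at the province level but not in the requested city:

```python
def search_with_fallback(call_tool, filters: dict, max_calls: int = 3) -> dict:
    """Retry with the documented broadening order -- drop city first,
    then product_type -- staying within the 3-call guidance."""
    relaxations = [
        lambda f: {k: v for k, v in f.items() if k != "city"},
        lambda f: {k: v for k, v in f.items() if k != "product_type"},
    ]
    attempt = dict(filters)
    result = {"data": []}
    for call_no in range(max_calls):
        result = call_tool("search_suppliers", attempt)
        if result["data"]:
            break
        if call_no < len(relaxations):
            attempt = relaxations[call_no](attempt)
    return result


# Stub: empty results whenever a city filter is present.
def fake_call_tool(name: str, args: dict) -> dict:
    if "city" in args:
        return {"has_more": False, "data": []}
    return {"has_more": False, "data": [{"supplier_id": "sup_042"}]}


hit = search_with_fallback(fake_call_tool,
                           {"province": "Guangdong", "city": "Shantou",
                            "product_type": "underwear"})
print(hit["data"][0]["supplier_id"])  # sup_042
```

Lowering min_capacity could be appended as a third relaxation step; the loop bound keeps total calls at the description's limit either way.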
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true. Description adds valuable context beyond annotations: discloses pagination ('Returns paginated supplier list'), specifies return fields (company name, location, lab-verified monthly capacity), and notes data verification levels. Could be improved with rate limit or auth context, but solid for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded purpose, clear 'USE WHEN' bullet section, filter summary, and return value description. Chinese translation adds value without clutter. Slightly verbose but earns its length through specific example coverage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking output schema, description comprehensively compensates by detailing return structure (pagination, specific fields). Covers nearly all of the 11 parameters via the filter listing, explains verification and data-confidence concepts, and addresses both English and Chinese use cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is roughly 80% (9 of 11 params described; limit and offset are not), establishing baseline 3. Description lists available filters ('province, city, factory type...') but largely repeats schema descriptions without adding syntax details, business logic, or parameter relationships. Schema already carries descriptive load effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb 'Search' targeting clear resource ('verified Chinese apparel manufacturers, apparel factories, and clothing suppliers'). Effectively distinguishes from siblings like 'search_fabrics' and 'get_fabric_suppliers' by emphasizing apparel/clothing scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent 'USE WHEN' section with six specific query patterns covering location, product types, certifications, and Chinese language queries. Provides concrete trigger phrases for the LLM. Lacks explicit 'when not to use' or named sibling alternatives (e.g., when to use get_supplier_detail instead), preventing a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
