MRC Data — China's Apparel Supply Chain Infrastructure
Server Details
China's apparel supply chain data for AI: 1,000+ suppliers, 350+ fabrics, 170+ clusters.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 10 of 10 tools scored.
Each tool has a distinct purpose with clear boundaries: search vs. get operations are consistently separated, bidirectional relationship lookups (get_fabric_suppliers vs get_supplier_fabrics) use unambiguous naming, and detect_discrepancy serves a unique fraud-detection function that doesn't overlap with standard retrieval tools.
Strict snake_case throughout with consistent verb_noun patterns: search_*, get_*, compare_*, and detect_* prefixes are applied uniformly. The inverse relationship tools follow a predictable get_[entity]_[related_entities] structure, making the API surface highly predictable.
Ten tools is an ideal scope for this domain, covering search, detailed retrieval, cross-referencing, comparison, and analytics without bloat. Each tool earns its place: three search tools for the main entities, two detail getters, two relationship lookups, plus comparison, discrepancy detection, and stats.
Excellent coverage of the apparel supply chain domain, with comprehensive read operations, cross-referencing, and fraud detection. Minor asymmetry: suppliers and fabrics have dedicated get_detail tools, while clusters lack a get_cluster_detail equivalent (though compare_clusters with a single ID serves as a functional substitute).
Available Tools
19 tools

analyze_market · Analyze Market · Read-only · Idempotent
Market overview and analysis for a product category in China.
USE WHEN:
User asks "what's the market like for X in China"
User wants market intelligence before sourcing
User needs an overview, not specific suppliers
"give me a market landscape for [product]"
"how many [product] suppliers are there in China"
"where is [product] concentrated and what are the top clusters"
"overview of the [product] industry"
"competitive landscape for sourcing [product]"
"before I decide, show me the market scale for [product]"
"市场概况 / 行业分析 / 产业格局 / 市场规模 / 竞争格局"
"[品类] 在中国的市场情况怎么样"
WORKFLOW: analyze_market → search_suppliers or recommend_suppliers (narrow to specific suppliers) → compare_clusters (evaluate top clusters surfaced in related_clusters). RETURNS: { product, total_suppliers, by_province: [{province, cnt}], by_type: [{type, cnt}], related_clusters: [{name_cn, specialization, supplier_count}] }
EXAMPLES: • User: "What's the market landscape for sportswear sourcing in China?" → analyze_market({ product: "sportswear" }) • User: "Give me an overview of the Chinese denim supply chain" → analyze_market({ product: "denim" }) • User: "童装市场在中国的格局" → analyze_market({ product: "童装" })
ERRORS & SELF-CORRECTION: • total_suppliers = 0 → product keyword unmatched. Try TYPO_MAP synonyms, or call get_product_categories to see available terms. • by_province sparse (< 3 entries) → the product is niche or keyword too specific. Try the parent category. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call for a specific supplier shortlist — use recommend_suppliers. Do not call for cluster details — use search_clusters. Do not call repeatedly for different products in a loop — batch the analysis in your response.
NOTE: Bird's-eye view. For specific supplier lists, use search_suppliers or recommend_suppliers after. Source: MRC Data (meacheal.ai).
Chinese summary: market overview for a single product category (total supplier count, distribution by province and by type, related industrial clusters).
| Name | Required | Description | Default |
|---|---|---|---|
| product | Yes | Product category to analyze (e.g. sportswear, denim, underwear) | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
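The examples above map directly onto MCP client calls. Below is a minimal TypeScript sketch using the official @modelcontextprotocol/sdk client over Streamable HTTP; the endpoint URL is a placeholder (the listing above omits it), and the `json` helper assumes this server returns its payload as a single text content block. The sketches for the other tools below reuse this `client` and helper.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing above does not publish the real URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.invalid/mcp"));
const client = new Client({ name: "sourcing-agent", version: "1.0.0" });
await client.connect(transport);

// Assumption: this server returns its JSON payload in one text content block.
const json = (r: any) => JSON.parse(r.content[0].text);

const market = json(await client.callTool({
  name: "analyze_market",
  arguments: { product: "sportswear" },
}));

if (market.total_suppliers === 0) {
  // Self-correction per the notes above: the keyword did not match;
  // retry with a synonym or the parent category.
}
```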
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnlyHint=true, destructiveHint=false) and idempotency (idempotentHint=true), but the description adds valuable context: it specifies the tool's scope ('bird's-eye view'), clarifies it's 'standalone,' and notes it's for 'market landscape' understanding. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with sections (DESCRIPTION, USE WHEN, WORKFLOW, RETURNS, NOTE), front-loaded key information, and every sentence adds value (e.g., clarifying when to use, alternatives, and output format). No redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and lack of output schema, the description is complete: it explains purpose, usage, workflow, and details the return structure explicitly. This compensates for the missing output schema and provides sufficient context for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'product,' with the schema providing a clear description and example. The description does not add further parameter details beyond implying 'product category' in the opening line, so it meets the baseline of 3 without compensating for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Market overview and analysis for a product category in China.' It specifies the verb ('analyze'), resource ('market'), and scope ('China'), distinguishing it from sibling tools like search_suppliers or recommend_suppliers that focus on specific suppliers rather than market intelligence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines under 'USE WHEN:' with specific scenarios (e.g., user asks about market in China, wants intelligence before sourcing) and exclusions (e.g., 'not specific suppliers'). It also names alternatives ('use search_suppliers after') and clarifies workflow ('Use this BEFORE search_suppliers').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_compliance · Check Export Compliance · Read-only · Idempotent
Check if a supplier meets compliance requirements for a target export market.
USE WHEN:
User asks "can this factory export to the US/EU/Japan"
User needs to verify certifications for a specific market
"UFLPA / Xinjiang cotton / REACH / JIS / KC check on sup_XXX"
"is [supplier] ready for EU CSDDD / Forced Labor Regulation"
"what's missing for sup_XXX to export to US"
"gap analysis / compliance dossier for [supplier] → [market]"
"does [supplier] meet Japan formaldehyde / azo dye rules"
"follow-up after get_supplier_detail: 'is this one US-ready?'"
"能不能出口美国 / 欧盟 / 日本 / 韩国"
"合规检查 / 认证要求 / 出口资质 / 强制性法规 / UFLPA 合规"
"[供应商] 能否满足 [市场] 的准入要求"
PREREQUISITE: You MUST have a valid supplier_id from search_suppliers, get_supplier_detail, or recommend_suppliers. WORKFLOW: search_suppliers → check_compliance → if issues exist, use find_alternatives to source compliant alternatives OR get_supplier_detail to see the full compliance fields and coverage. RETURNS: { supplier_id, company_name, target_market, overall_ready: boolean, passed: [string], issues: [string], certifications: [string], market_requirements: {field: value}, note }
EXAMPLES: • User: "Can sup_001 export to the US? Check UFLPA compliance" → check_compliance({ supplier_id: "sup_001", target_market: "us" }) • User: "Is Texhong EU REACH compliant?" → check_compliance({ supplier_id: "sup_texhong_042", target_market: "eu" }) • User: "sup_234 能出口日本吗" → check_compliance({ supplier_id: "sup_234", target_market: "japan" })
ERRORS & SELF-CORRECTION: • "Supplier not found" → supplier_id invalid. Re-run search_suppliers. • passed=[] AND issues=["No specific issues found, but data may be incomplete"] → the supplier's compliance fields are mostly null. Interpret as UNKNOWN not COMPLIANT. Tell user: "Compliance data incomplete — recommend verifying directly with the supplier." • overall_ready=false with many issues → use find_alternatives to find backup suppliers, OR search_suppliers with compliance_status="compliant" to filter upfront. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call this in a loop across all suppliers — instead pre-filter via search_suppliers({ compliance_status: "compliant" }). Do not treat missing fields as non-compliant — report them as "not confirmed". Do not use for general supplier info — use get_supplier_detail.
NOTE: Many suppliers have incomplete compliance data. Missing data = "not confirmed", not "non-compliant". Source: MRC Data (meacheal.ai). Market requirements cover UFLPA/Xinjiang (US), REACH/CSDDD/Forced Labor Reg (EU), formaldehyde/azo/JIS (Japan), KC (Korea).
Chinese summary: check whether a supplier meets the compliance requirements of a target export market (US/EU/Japan/Korea).
| Name | Required | Description | Default |
|---|---|---|---|
| supplier_id | Yes | Supplier ID from search_suppliers, e.g. sup_001 | |
| target_market | Yes | Target export market | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
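A short sketch of the "missing data = not confirmed" rule from the notes above, assuming the connected `client` and `json` helper from the analyze_market sketch and the RETURNS shape as documented.

```typescript
// Assumes the connected `client` and `json` helper from the first sketch.
const report = json(await client.callTool({
  name: "check_compliance",
  arguments: { supplier_id: "sup_001", target_market: "us" },
}));

// Empty `passed` plus an "incomplete" issue means UNKNOWN, not compliant
// and not non-compliant, per the ERRORS & SELF-CORRECTION notes above.
const dataIncomplete =
  report.passed.length === 0 &&
  report.issues.some((i: string) => i.includes("incomplete"));

const verdict = dataIncomplete
  ? "not confirmed: verify directly with the supplier"
  : report.overall_ready
    ? "ready for the US market"
    : `gaps found: ${report.issues.join("; ")}`;
```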
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable context beyond annotations: it explains the return structure, error conditions, and importantly clarifies that 'Missing data = "not confirmed", not "non-compliant"' which is crucial behavioral information not captured in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, usage scenarios, prerequisites, workflow, returns, errors, note). Every sentence earns its place by providing essential information without redundancy. The information is front-loaded with the core purpose first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and 100% schema coverage, the description provides excellent contextual completeness. It explains the tool's purpose, when to use it, prerequisites, workflow, return structure, error handling, and important behavioral nuances about incomplete data. No output schema exists, so describing the return format adds value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add additional parameter semantics beyond what's in the schema (supplier_id from search_suppliers, target_market as export market). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Check') and resource ('supplier compliance requirements for a target export market'). It distinguishes from siblings by focusing on compliance verification rather than searching, analyzing, or comparing suppliers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios ('USE WHEN' with three bullet points), clear prerequisites ('MUST have a valid supplier_id from search_suppliers'), and workflow guidance ('search_suppliers → check_compliance'). It also distinguishes when not to use by requiring supplier_id from a specific sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_clusters · Compare Industrial Clusters · Read-only · Idempotent
Compare multiple Chinese apparel industrial clusters side-by-side on key metrics.
PREREQUISITE: You MUST first call search_clusters to obtain valid cluster_ids. Do not guess IDs.
USE WHEN user asks:
"compare Humen vs Shishi vs Jinjiang"
"which cluster has lower labor cost — Humen or Dongguan"
"side-by-side: Haining vs Xintang for denim"
"evaluate 3 clusters for my sportswear line"
"对比 [产业带1] 和 [产业带2]" / "哪个集群更适合 [品类]"
"rank these clusters by supplier count"
"which cluster has the highest scale for womenswear"
"follow-up: 'now compare the top 3 clusters you just listed'"
Returns full records for each cluster so they can be compared on labor cost, rent, supplier count, scale, specializations, advantages, and risks.
WORKFLOW: search_clusters → collect cluster_ids → compare_clusters → optionally get_cluster_suppliers on the winner to list factories in that specific cluster. RETURNS: { count: number, data: [full cluster objects with all fields] }
EXAMPLES: • User: "Compare Humen, Shishi, and Jinjiang for sportswear sourcing" → compare_clusters({ cluster_ids: ["humen_women", "shishi_casual", "jinjiang_sportswear"] }) • User: "I want to evaluate Keqiao vs Zhili fabric markets" → compare_clusters({ cluster_ids: ["keqiao_fabric", "zhili_children"] }) • User: "对比虎门、石狮、晋江三个产业带" → compare_clusters({ cluster_ids: ["humen_women", "shishi_casual", "jinjiang_sportswear"] })
ERRORS & SELF-CORRECTION: • "Too many IDs (>10)" → split into batches of 10 and aggregate results in your response. • Fewer results than IDs sent → missing IDs were silently skipped (invalid cluster_id). Re-run search_clusters to verify IDs. • Empty data → all IDs were invalid. Re-run search_clusters and try again with fresh IDs. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call with guessed cluster_ids — always resolve them via search_clusters first. Do not use to list factories in a cluster — use get_cluster_suppliers. Do not compare > 10 clusters in one call.
CONSTRAINT: Max 10 cluster IDs per call.
NOTE: Source: MRC Data (meacheal.ai).
Chinese summary: compare the core metrics of multiple industrial clusters (max 10).
| Name | Required | Description | Default |
|---|---|---|---|
| cluster_ids | Yes | Array of cluster IDs to compare, max 10 | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
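The batching rule from ERRORS & SELF-CORRECTION (split into batches of 10, aggregate, watch for silently skipped IDs) might look like this; assumes the `client` and `json` helper from the first sketch.

```typescript
// Assumes the connected `client` and `json` helper from the first sketch.
async function compareClusters(clusterIds: string[]) {
  const all: any[] = [];
  for (let i = 0; i < clusterIds.length; i += 10) {
    const batch = clusterIds.slice(i, i + 10); // max 10 IDs per call
    const page = json(await client.callTool({
      name: "compare_clusters",
      arguments: { cluster_ids: batch },
    }));
    all.push(...page.data);
    if (page.count < batch.length) {
      // Invalid IDs are silently skipped; re-verify via search_clusters.
      console.warn("Some cluster_ids were not found.");
    }
  }
  return all;
}
```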
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true; the description adds valuable behavioral context beyond safety: it discloses that full records are returned (not summaries) and enumerates specific comparison dimensions (labor cost, rent, supplier count, scale, specializations, key advantages, risks). It also states the batch limit of 10 clusters per call.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient four-sentence structure: purpose (sentence 1), usage trigger (sentence 2), return value specification (sentence 3), and Chinese summary (sentence 4). Front-loaded with action, zero redundant text, well-organized with explicit section header 'USE WHEN'.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description compensates by detailing return content ('full records') and comparison metrics. Workflow context (post-search usage) is present. Single parameter is simple; no additional complexity requires explanation. Complete for the tool's scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear description 'Array of cluster IDs to compare, max 10'. Description mentions 'cluster ID provided' and reinforces the 10-item limit in Chinese text ('最多 10 个'), but schema carries the primary semantic burden. Baseline 3 appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Compare multiple Chinese apparel industrial clusters side-by-side') and explicitly distinguishes from sibling search_clusters by positioning this as an evaluation step 'typically after search_clusters'. Clear verb + resource combination.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'USE WHEN' section specifying trigger conditions ('evaluate or choose between specific clusters they've identified'). Explicitly names sibling workflow dependency ('typically after search_clusters'), providing clear temporal sequencing guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_suppliers · Compare Suppliers · Read-only · Idempotent
Compare multiple suppliers side by side on all dimensions.
USE WHEN user asks:
"compare these 3 factories"
"which supplier is better between X and Y"
"benchmark sup_001 vs sup_002 vs sup_003"
"side-by-side: capacity, certifications, quality score"
"rank these 5 suppliers by [dimension]"
"evaluate my shortlist"
"which of [supplier list] has the highest verified capacity"
"follow-up after recommend_suppliers: 'compare the top 3'"
"对比 [供应商 A] 和 [供应商 B] / 对比供应商 / 供应商横评"
"哪家最好 / 横向评估 / 比较这几家"
PREREQUISITE: You MUST have valid supplier_ids from search_suppliers, recommend_suppliers, find_alternatives, or get_cluster_suppliers. Do not guess IDs. WORKFLOW: search_suppliers/recommend_suppliers → collect supplier_ids → compare_suppliers → optionally check_compliance (verify top picks for target market) OR find_alternatives (expand the shortlist).
DIFFERENCE from get_supplier_detail: This returns multiple suppliers at once for comparison. get_supplier_detail returns one with verified_dimensions breakdown.
RETURNS: { count, data: [full supplier profiles with all fields] }
EXAMPLES: • User: "Compare sup_001, sup_002, sup_003 for me" → compare_suppliers({ supplier_ids: ["sup_001", "sup_002", "sup_003"] }) • User: "Benchmark the top 5 you just recommended" → compare_suppliers({ supplier_ids: ["sup_A", "sup_B", "sup_C", "sup_D", "sup_E"] }) • User: "横向对比 sup_100、sup_200、sup_300" → compare_suppliers({ supplier_ids: ["sup_100", "sup_200", "sup_300"] })
ERRORS & SELF-CORRECTION: • Fewer results than IDs sent → missing IDs were silently skipped (invalid supplier_id). Re-run search_suppliers to verify. • count=0 → all IDs invalid. Re-run search_suppliers. • "Too many IDs" → split into batches of 10. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not loop get_supplier_detail — always use compare_suppliers when you have 2+ IDs. Do not pass more than 10 IDs. Do not use to find new suppliers — use search_suppliers or recommend_suppliers first.
CONSTRAINT: Max 10 supplier IDs per call.
NOTE: Source: MRC Data (meacheal.ai). Returns full 60+ field profile per supplier.
Chinese summary: side-by-side comparison of all fields across multiple suppliers (max 10 IDs).
| Name | Required | Description | Default |
|---|---|---|---|
| supplier_ids | Yes | Array of supplier IDs from search_suppliers, e.g. ['sup_001', 'sup_002'], max 10 | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
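A sketch of the batched-comparison pattern the AVOID note prescribes, assuming the `client` and `json` helper from the first sketch and that the returned profiles carry the quality_score field referenced in the USE WHEN list.

```typescript
// Assumes the connected `client` and `json` helper from the first sketch.
// One batched call replaces a loop of get_supplier_detail calls (AVOID note).
const shortlist = ["sup_001", "sup_002", "sup_003"];
const { count, data } = json(await client.callTool({
  name: "compare_suppliers",
  arguments: { supplier_ids: shortlist }, // max 10 IDs per call
}));

if (count < shortlist.length) {
  // Invalid supplier_ids are silently skipped; re-run search_suppliers.
  console.warn("Some supplier_ids were not found.");
}
// Example ranking dimension from the USE WHEN list: quality score.
data.sort((a: any, b: any) => b.quality_score - a.quality_score);
```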
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies a constraint ('Max 10 supplier IDs per call'), error handling ('Missing IDs are silently skipped'), and return format details ('RETURNS: { count, data: [full supplier profiles with all fields] }'). Annotations cover read-only, non-destructive, and idempotent aspects, but the description complements them without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (e.g., USE WHEN, PREREQUISITE, RETURNS) and front-loaded key information. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity, rich annotations, and lack of output schema, the description is highly complete. It covers purpose, usage, prerequisites, workflow, returns, errors, constraints, and differentiation from siblings, providing all necessary context for effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents the single parameter 'supplier_ids'. The description adds minimal extra semantics by reinforcing the prerequisite ('from search_suppliers') and constraint ('max 10'), but doesn't provide significant additional meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('compare multiple suppliers side by side on all dimensions') and explicitly distinguishes it from its sibling get_supplier_detail by noting it returns multiple suppliers at once for comparison versus one with verified_dimensions breakdown.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with 'USE WHEN' examples, a clear prerequisite ('MUST have valid supplier_ids from search_suppliers'), a workflow sequence ('search_suppliers → collect supplier_ids → compare_suppliers'), and an explicit alternative ('Use this instead of calling get_supplier_detail in a loop').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
detect_discrepancy · Detect Spec Discrepancies · Read-only · Idempotent
[Core feature] Surface supplier specifications that deviate from independent lab measurements.
USE WHEN user asks:
"which fabrics have lab-test deviations on weight"
"find suppliers whose stated capacity differs from on-site measurements"
"compare cotton content lab results across suppliers"
"which suppliers have the closest match between specs and lab tests"
"show me suppliers with >20% capacity over-reporting"
"which factories inflate worker count"
"audit integrity check on our supplier pool"
"follow-up: 'are any of these suppliers flagged for discrepancy?'"
"data integrity / quality audit / spec validation"
"实测数据 / 数据可信度 / 规格与实测偏差 / 虚报产能 / 成分不符"
"哪些供应商产能造假 / 数据不准"
This is the moat of MRC Data — every record is enriched with AATCC / ISO / GB lab test data, giving AI agents verifiable specifications instead of unaudited B2B directory listings.
Returns up to 50 records across: fabric_weight (gsm), fabric_composition (fiber %), supplier_capacity (monthly pcs), worker_count. Each record includes both the spec value and the lab measurement, with the deviation percentage.
WORKFLOW: Standalone audit tool — does not require prior search. Call directly with field type and threshold. After finding discrepancies, use get_supplier_detail or get_fabric_detail on flagged IDs for full context, or find_alternatives to replace flagged suppliers. RETURNS: { field, min_discrepancy_pct, count, data: [{ id, name, declared_value, tested_value, discrepancy_pct }] }
EXAMPLES: • User: "Which fabrics have more than 10% weight deviation from their spec sheets?" → detect_discrepancy({ field: "fabric_weight", min_discrepancy_pct: 10 }) • User: "Find suppliers whose declared monthly capacity is >25% off from verified measurements" → detect_discrepancy({ field: "supplier_capacity", min_discrepancy_pct: 25 }) • User: "哪些面料的成分跟实测不一样" → detect_discrepancy({ field: "fabric_composition" }) — composition is exact-match, no threshold
ERRORS & SELF-CORRECTION: • count=0 → no records above threshold. Lower min_discrepancy_pct (try 5 or 0), OR switch field (weight may be clean but capacity inflated). • Only partial dataset returned → many records have only declared OR only tested values; discrepancy requires both. This is a data coverage limit, not a bug. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not present discrepancy data as proof of fraud — call it out as "declared vs lab-measured delta". Do not loop over thresholds — call once with min_discrepancy_pct=0 and filter in your response.
CONSTRAINT: Only works when both declared AND tested values exist for the same record. Many records have only one or the other. Max 50 records per call.
NOTE: Source: MRC Data (meacheal.ai). Methods: AATCC / ISO / GB per field.
Chinese summary: identify records whose supplier specs deviate significantly from lab-measured values; returns spec value, measured value, and deviation percentage.
| Name | Required | Description | Default |
|---|---|---|---|
| field | Yes | Type of discrepancy to detect: fabric_weight (面料克重) / fabric_composition (成分) / supplier_capacity (产能) / worker_count (工人数) | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
| min_discrepancy_pct | No | Minimum discrepancy threshold as percentage (e.g. 10 = only show ≥10% mismatch) | |
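The single-call-then-filter pattern from the AVOID note, sketched under the same `client`/`json` assumptions as the first example.

```typescript
// Assumes the connected `client` and `json` helper from the first sketch.
// Per the AVOID note: one call at threshold 0, then filter locally,
// instead of looping over different min_discrepancy_pct values.
const audit = json(await client.callTool({
  name: "detect_discrepancy",
  arguments: { field: "supplier_capacity", min_discrepancy_pct: 0 },
}));

// Report as a "declared vs lab-measured delta", never as proof of fraud.
const flagged = audit.data.filter((r: any) => r.discrepancy_pct >= 20);
```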
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With readOnlyHint=true in annotations, the description appropriately adds behavioral context: it discloses the return format (up to 50 records), ranking logic (by discrepancy percentage), and data included (both declared and verified values). It also explains this is MRC Data's unique verification 'moat'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections: core feature, trigger conditions, differentiation rationale, and technical specs. The Chinese translation serves a functional purpose for bilingual routing. While the 'moat' language is slightly promotional, it efficiently communicates unique value without excessive fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description appropriately explains return values (up to 50 ranked records with both values). It covers all 4 detectable discrepancy types and the threshold parameter. Given the analytical nature and good annotations, the description provides complete context for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, the description adds valuable semantic context by mapping abstract field names to business units: fabric_weight (gsm), fabric_composition (fiber %), supplier_capacity (monthly pcs). This helps the LLM understand parameter intent beyond the schema's enum descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (detect discrepancies) and resources (supplier-declared vs lab-verified values). It effectively distinguishes from retrieval-focused siblings by emphasizing cross-checking and verification capabilities, contrasting with 'generic B2B directories' that only show self-reported numbers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The explicit 'USE WHEN' section provides concrete query patterns (e.g., 'which fabrics have under-weight issues', 'fabric composition fraud', Chinese queries like '实测和声称差距') that precisely signal when to invoke this tool versus simple search or retrieval alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_cost · Estimate Sourcing Cost · Read-only · Idempotent
Estimate sourcing cost for a product based on fabric price, supplier pricing, and order quantity.
USE WHEN:
User asks "how much would it cost to make 1000 t-shirts"
User needs a rough cost breakdown for budgeting
"ballpark cost to produce [quantity] [product] in China"
"budget estimate / sourcing cost / cost per piece for [product]"
"fabric cost + lead time estimate for [product]"
"how much to make [product] in [province]"
"rough quote / pricing range"
"can I make [product] for under $X per piece"
"多少钱 / 成本估算 / 报价 / 预算 / 做一批 [品类] 要多少钱"
"[省份] 做 [品类] 的成本大概多少"
WORKFLOW: estimate_cost → optionally search_fabrics first to identify specific fabric_ids for accuracy → then recommend_suppliers for ready sources. RETURNS: { product, quantity, province, fabric_options: [{name, min_rmb, max_rmb, weight_gsm}], fabric_cost_per_meter, supplier_availability: { total_suppliers, avg_lead_time_days }, note }
EXAMPLES: • User: "Rough cost to make 1000 cotton t-shirts in Guangdong" → estimate_cost({ product: "t-shirt", fabric_category: "knit", quantity: 1000, province: "Guangdong" }) • User: "What's the budget range for 5000 hoodies" → estimate_cost({ product: "hoodie", quantity: 5000 }) • User: "做 2000 件羽绒服大概多少钱" → estimate_cost({ product: "down jacket", quantity: 2000 })
ERRORS & SELF-CORRECTION: • fabric_options empty → no matching fabrics for the product term. Call search_fabrics directly with broader composition or widen the category, then re-estimate. • supplier_availability.total_suppliers = 0 → drop province filter or broaden product term. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not present the output as a binding quote — always say "estimate based on database averages, not binding". Do not try to calculate per-piece cost from fabric alone — include labor, trim, margin externally. Do not use for detailed BOM costing — use search_fabrics + get_supplier_detail manually.
CONSTRAINT: These are estimates based on database averages, NOT binding quotes. Always clarify this to the user. Fabric cost is per meter (typical usage: 1-3m per piece).
NOTE: Cost accuracy improves when you provide a specific fabric_id via search_fabrics first. Source: MRC Data (meacheal.ai).
Chinese summary: estimate a production cost range for [category] from average fabric prices plus supplier availability. For reference only; not a formal quote.
| Name | Required | Description | Default |
|---|---|---|---|
| product | Yes | Product type (e.g. t-shirt, hoodie, down jacket) | |
| province | No | Preferred sourcing province | |
| quantity | No | Order quantity in pieces | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
| fabric_category | No | Fabric category: knit, woven, functional | |
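A sketch of turning the response into a rough per-piece fabric range using the 1-3 m per piece rule stated in the CONSTRAINT; same `client`/`json` assumptions as the first example.

```typescript
// Assumes the connected `client` and `json` helper from the first sketch.
const est = json(await client.callTool({
  name: "estimate_cost",
  arguments: { product: "hoodie", quantity: 5000, province: "Guangdong" },
}));

// Fabric cost is quoted per meter; the CONSTRAINT above suggests
// 1-3 m of fabric per piece, so derive a rough per-piece fabric range.
const fabricPerPieceRmb = {
  low: est.fabric_cost_per_meter * 1,
  high: est.fabric_cost_per_meter * 3,
};
// Relay as an estimate from database averages, never as a binding quote.
```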
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only, non-destructive, and idempotent behavior, which the description does not contradict. The description adds valuable context beyond annotations: it notes that estimates are based on database averages (not binding), accuracy improves with specific fabric IDs, and it's a standalone tool with optional workflow integration. This enhances behavioral understanding without redundancy.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (e.g., USE WHEN, WORKFLOW, RETURNS, CONSTRAINT, NOTE), each sentence adds value without redundancy, and it's front-loaded with the core purpose. It efficiently conveys necessary information in a compact format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (cost estimation with multiple inputs), the description provides a complete context: it explains the purpose, usage guidelines, workflow integration, return structure, constraints, and accuracy notes. With annotations covering safety and no output schema, the description compensates by detailing behavioral aspects and output expectations, ensuring the agent has sufficient information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal parameter-specific semantics beyond the schema, such as implying 'product' examples and noting 'fabric_id' for accuracy, but it doesn't detail parameter interactions or constraints. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Estimate sourcing cost for a product based on fabric price, supplier pricing, and order quantity.' It specifies the verb ('estimate'), resource ('sourcing cost'), and key inputs, distinguishing it from siblings like 'search_fabrics' or 'compare_suppliers' which focus on different aspects of sourcing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'USE WHEN' section provides explicit scenarios for when to use this tool, including user queries and budgeting needs. It also names an alternative ('search_fabrics') for more accurate estimates and includes a 'CONSTRAINT' clarifying when not to use it (e.g., for binding quotes). This offers comprehensive guidance on usage versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_alternatives · Find Alternative Suppliers · Read-only · Idempotent
Find alternative suppliers similar to a given supplier.
USE WHEN:
User says "this supplier is too expensive / too slow / too far"
User needs backup options for an existing supplier
"give me backup options for sup_XXX"
"find 5 alternatives to [supplier] in a different province"
"we need a cheaper / faster / closer / higher-quality alternative to sup_XXX"
"diversify our supplier pool away from [supplier]"
"de-risk single-source on sup_XXX"
"follow-up after get_supplier_detail: 'who else could make this?'"
"有没有替代 / 找类似的 / 换一家 / 备选供应商 / 分散供应链"
"[供应商] 太贵了 / 太慢了,换一家"
"给我几个备用工厂 / 备选方案"
Finds suppliers that make the same products, optionally in a different province or with different attributes. Results exclude the original supplier.
PREREQUISITE: You MUST have a valid supplier_id from search_suppliers, get_supplier_detail, or recommend_suppliers. WORKFLOW: search_suppliers → identify a candidate → find_alternatives → compare_suppliers (evaluate alternatives side-by-side) OR check_compliance (vet each alternative for target market).
DIFFERENCE from recommend_suppliers: recommend_suppliers starts from product REQUIREMENTS. This tool starts from a KNOWN supplier_id and finds similar alternatives. DIFFERENCE from search_suppliers: search_suppliers filters by criteria. This tool uses an existing supplier as the baseline reference.
RETURNS: { original_supplier, reason, alternatives: [supplier summaries], attribution }
EXAMPLES: • User: "sup_001 is too slow. Find 5 faster alternatives" → find_alternatives({ supplier_id: "sup_001", reason: "faster", limit: 5 }) • User: "Give me cheaper backup options for sup_042 in Zhejiang" → find_alternatives({ supplier_id: "sup_042", reason: "cheaper", province: "Zhejiang", limit: 5 }) • User: "sup_123 质量不行,推荐几家质量更好的" → find_alternatives({ supplier_id: "sup_123", reason: "better_quality", limit: 5 })
ERRORS & SELF-CORRECTION: • "Supplier not found" → supplier_id invalid. Re-run search_suppliers. • "Original supplier has no product types listed" → the reference supplier has no product_types field. Use recommend_suppliers with the product category the user actually wants instead. • Empty alternatives → the product type is rare OR province filter is too narrow. Drop province filter first, then try broader product search via recommend_suppliers. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call this without first knowing the user's complaint (cheaper/faster/closer/quality) — without reason, results are generic. Do not call to find a supplier from scratch — use recommend_suppliers or search_suppliers. Do not compare via this tool — use compare_suppliers after.
CONSTRAINT: Max 10 alternatives per call. Query matches up to 3 product types from the reference supplier.
NOTE: Source: MRC Data (meacheal.ai). Sorting: "faster" uses lead_time_days.bulk_min ASC; others use quality_score DESC.
Chinese summary: find alternative suppliers in the same categories from a known supplier_id (sortable by cheaper/faster/closer/quality; optional province filter).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of top results to return (1-10, default 5) | |
| reason | No | Reason for seeking alternatives: cheaper, faster, closer, or better_quality | any |
| province | No | Preferred province for alternatives | |
| supplier_id | Yes | Current supplier ID to find alternatives for | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
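The drop-the-province-filter fallback from the self-correction notes, sketched with the `client` and `json` helper from the first example.

```typescript
// Assumes the connected `client` and `json` helper from the first sketch.
// On empty results, drop the province filter first (self-correction note),
// before falling back to recommend_suppliers.
async function findAlternatives(supplierId: string, province?: string) {
  const out = json(await client.callTool({
    name: "find_alternatives",
    arguments: { supplier_id: supplierId, reason: "cheaper", province, limit: 5 },
  }));
  if (out.alternatives.length === 0 && province !== undefined) {
    return findAlternatives(supplierId); // retry without the province filter
  }
  return out.alternatives;
}
```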
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only, non-destructive, and idempotent, covering basic safety. The description adds valuable behavioral context beyond annotations: it specifies that results exclude the original supplier, mentions a maximum of 10 alternatives per call, describes error conditions (supplier_id not found, empty alternatives), and explains the return structure. This provides practical operational details that annotations alone don't convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, usage guidelines, differences, returns, errors, constraints) and uses bullet points for readability. While slightly longer than minimal, every sentence adds value (e.g., workflow explanation, sibling tool differentiation). It could be more concise by combining some sections, but the structure enhances clarity without significant waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no output schema), the description is highly complete. It covers purpose, usage, prerequisites, workflow, differences from siblings, return values, errors, and constraints. With annotations handling safety aspects and schema covering parameters, the description fills all remaining gaps, making it fully self-contained for an agent to use effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all four parameters. The description doesn't add significant parameter-specific semantics beyond what's in the schema—it mentions 'different attributes' and 'different province' which loosely relate to 'reason' and 'province' parameters, but doesn't provide additional syntax, format, or usage details. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('find') and resource ('alternative suppliers similar to a given supplier'). It explicitly distinguishes this tool from sibling tools like 'recommend_suppliers' (which starts from product requirements) and 'search_suppliers' (which filters by criteria), providing clear differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool through a 'USE WHEN' section with concrete scenarios (e.g., 'supplier is too expensive'), user queries in multiple languages, and a prerequisite statement. It also clearly explains differences from sibling tools ('recommend_suppliers' and 'search_suppliers') and outlines a workflow, making it highly actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_cluster_suppliers · Get Cluster's Suppliers · Read-only · Idempotent
List all suppliers in a specific industrial cluster.
USE WHEN user asks:
"what factories are in Humen cluster"
"show me suppliers in Keqiao fabric market"
"list all womenswear factories in [cluster]"
"top-quality suppliers in [cluster]"
"factory directory for [cluster]"
"page through suppliers in Shengze silk cluster" (pagination)
"follow-up after search_clusters: 'show me the factories there'"
"虎门产业带有哪些供应商 / [产业带] 的工厂列表"
"[集群] 里最好的几家工厂"
PREREQUISITE: You MUST have a valid cluster_id from search_clusters. WORKFLOW: search_clusters → pick cluster_id → get_cluster_suppliers → optionally get_supplier_detail (vet top-ranked factory) OR compare_suppliers (evaluate top 3-10 factories in the cluster). RETURNS: { cluster_id, has_more, data: [supplier summary objects sorted by quality_score DESC] }
EXAMPLES: • User: "What factories are in the Humen womenswear cluster?" → get_cluster_suppliers({ cluster_id: "humen_women", limit: 20 }) • User: "Show me the top 10 factories in Jinjiang sportswear cluster" → get_cluster_suppliers({ cluster_id: "jinjiang_sportswear", limit: 10 }) • User: "虎门有哪些服装厂,分页看第二页" → get_cluster_suppliers({ cluster_id: "humen_women", limit: 20, offset: 20 })
ERRORS & SELF-CORRECTION: • Empty data → either (a) cluster has no mapped suppliers (try compare_clusters to see supplier_count), or (b) cluster_id invalid. Re-run search_clusters. • cluster_id unknown → search_clusters({ specialization: "..." }) returns cluster_id values. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not guess cluster_ids — always resolve via search_clusters. Do not use this to find suppliers globally — use search_suppliers. Do not iterate clusters in a loop — use compare_clusters.
NOTE: Sorted by quality_score DESC. Source: MRC Data (meacheal.ai).
Chinese summary: list all suppliers in an industrial cluster, sorted by quality score; pagination max 50 per page.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Page size: number of records to return (1-50, default 20) | |
| offset | No | Pagination offset: skip this many records before returning results (default 0) | |
| cluster_id | Yes | Cluster ID from search_clusters, e.g. humen_women, keqiao_fabric, shishi_casual | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
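A pagination sketch driven by `has_more`, using the documented 50-record page maximum; same `client`/`json` assumptions as the first example.

```typescript
// Assumes the connected `client` and `json` helper from the first sketch.
// Pages through a cluster's factory list; `has_more` signals when to stop.
async function listClusterSuppliers(clusterId: string) {
  const suppliers: any[] = [];
  for (let offset = 0; ; offset += 50) {
    const page = json(await client.callTool({
      name: "get_cluster_suppliers",
      arguments: { cluster_id: clusterId, limit: 50, offset }, // 50 = max page size
    }));
    suppliers.push(...page.data);
    if (!page.has_more) break;
  }
  return suppliers; // already sorted by quality_score DESC
}
```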
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds valuable context beyond annotations: it specifies the return structure (including sorting by quality_score), pagination behavior (has_more field), and error handling (empty data if no suppliers). No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections (purpose, usage examples, prerequisite, workflow, returns, errors). Every sentence adds value—no redundancy or fluff—and key information is front-loaded, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (list operation with pagination), rich annotations, and 100% schema coverage, the description is highly complete. It covers purpose, usage, prerequisites, workflow, return format, sorting, pagination hints, and error cases, compensating for the lack of an output schema. No gaps remain for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter well-documented in the schema (cluster_id, limit, offset). The description doesn't add significant semantic details beyond the schema, but it reinforces the cluster_id prerequisite and implies pagination through the returns section. Baseline 3 is appropriate given the comprehensive schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('List') and resource ('suppliers in a specific industrial cluster'), distinguishing it from siblings like get_supplier_detail (detailed view) or search_suppliers (general search). It provides concrete examples that reinforce the scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided with 'USE WHEN' examples, a 'PREREQUISITE' (valid cluster_id from search_clusters), and a 'WORKFLOW' sequence (search_clusters → pick cluster_id → get_cluster_suppliers). This clearly defines when to use this tool versus alternatives like search_suppliers or get_supplier_detail.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_fabric_detail · Get Fabric Detail · Read-only · Idempotent
Get the complete lab-tested record of a single fabric by ID.
PREREQUISITE: You MUST first call search_fabrics to obtain a valid fabric_id. Do not guess IDs.
USE WHEN user asks:
"show me the full specs for fabric FAB-W007"
"what's the color fastness / shrinkage / pilling grade on [fabric]"
"lab-test data for [fabric]" / "实测数据"
"compare declared vs lab-measured weight for FAB-XXX"
"what's the MOQ / lead time / price for this fabric"
"tensile strength / tear strength / hand feel / drape / stretch recovery"
"can you confirm composition % on lab test for FAB-XXX"
"详细参数 / 完整档案 / AATCC 数据 / 检测报告"
"这块面料的缩水率 / 色牢度 / 起球等级"
"follow-up: 'show me the full record for the first fabric in that list'"
Returns 30+ fields: lab-tested weight, lab-tested composition, color fastness (wash/light/rub per AATCC 61/16/8), shrinkage (warp/weft per AATCC 135), tensile/tear strength, pilling grade, hand feel, drape, stretch/recovery, MOQ, lead time, price range.
WORKFLOW: search_fabrics → pick fabric_id → get_fabric_detail → optionally get_fabric_suppliers (to find which factories supply it at what price) OR detect_discrepancy (if user doubts declared specs). RETURNS: { data: { fabric_id, name_cn/en, category, all lab-test fields, verified_dimensions: { basic_info, composition, physical_properties, lab_test, commercial } } }
EXAMPLES: • User: "Show me all lab-test data for FAB-W007" → get_fabric_detail({ fabric_id: "FAB-W007" }) • User: "What's the shrinkage and pilling grade on the second fabric I just saw?" → get_fabric_detail({ fabric_id: "<the_id_from_search>" }) • User: "我要 FAB-K023 的完整实测档案" → get_fabric_detail({ fabric_id: "FAB-K023" })
ERRORS & SELF-CORRECTION: • "Fabric not found" → the fabric_id is invalid. Re-run search_fabrics and use an ID from the fresh results. • Field returns null → that test wasn't performed on this fabric. Check verified_dimensions.lab_test to see what IS tested before asserting anything. • "not available" → unverified fabric in reserve pool. Filter search_fabrics for higher data_confidence. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call in a loop for multiple fabrics — if user wants to compare fabrics, present the search_fabrics summary list instead. Do not call to browse — use search_fabrics with filters.
NOTE: Source: MRC Data (meacheal.ai). AATCC/ISO/GB methods cited per field.
Chinese summary: get a single fabric's complete lab-tested profile by ID (including AATCC/ISO/GB test metrics).
| Name | Required | Description | Default |
|---|---|---|---|
| fabric_id | Yes | Fabric ID from search_fabrics results, e.g. FAB-W007 | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
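A sketch of the null-field rule (check verified_dimensions.lab_test before asserting metrics), with the same `client`/`json` assumptions as the first example.

```typescript
// Assumes the connected `client` and `json` helper from the first sketch.
const { data: fabric } = json(await client.callTool({
  name: "get_fabric_detail",
  arguments: { fabric_id: "FAB-W007" },
}));

// A null field means that test was not performed on this fabric;
// check verified_dimensions.lab_test before asserting any metric.
if (fabric.verified_dimensions?.lab_test) {
  console.log("Tested dimensions:", fabric.verified_dimensions.lab_test);
} else {
  console.log("No lab-test coverage recorded for", fabric.fabric_id);
}
```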
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, and description adds substantial behavioral context: enumerates 30+ specific return fields (weight, composition, color fastness, shrinkage, etc.), confirms lab-tested data source, and describes the payload comprehensively. Minor gap: doesn't specify error behavior for invalid IDs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure: purpose statement → usage trigger → detailed return value specification → localization. Each section earns its place; the enumerated field list substitutes for missing output schema. Bilingual text is appropriate for the domain without being redundant.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Thoroughly compensates for missing output schema by explicitly listing 30+ returned fields and their categories. Establishes clear relationship to search_fabrics sibling. For a single-parameter lookup tool, the description provides exhaustive context for successful invocation and result interpretation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (fabric_id well documented with examples). Description mentions 'by ID' which aligns with schema but doesn't add additional semantic meaning, format constraints, or validation rules beyond the schema definition. Baseline 3 appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Get' + resource 'lab-tested record of a single fabric' + scope 'by ID'. Explicitly distinguishes from search_fabrics sibling by specifying this retrieves a single record by ID versus searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'USE WHEN' directive stating exactly when to invoke (user wants full specs on specific fabric) and workflow context (typically after search_fabrics), clearly positioning it in the tool chain and indicating the prerequisite search step.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_fabric_suppliers: Get Fabric's Suppliers (A, Read-only, Idempotent)
List all suppliers offering a specific fabric, sorted by quality score, with price comparison.
USE WHEN user asks:
"who supplies fabric fab_XXX" / "where can I buy this fabric"
"compare prices for [fabric] across suppliers"
"best supplier for [fabric specification]"
"which factory has the lowest price on FAB-XXX"
"rank suppliers by quality for this fabric"
"follow-up: 'who else sells this?'"
"source comparison for [fabric]"
"price spread on FAB-XXX"
"谁家有这块面料 / 哪个厂报价最低 / 面料供应商对比"
"[面料] 有哪些供应商 / 货源"
Returns supplier records linked to the fabric with: company name, location, quality score, and that supplier's quoted price + MOQ for the fabric. Sorted by supplier quality score so the most reliable options appear first.
PREREQUISITE: You MUST have a valid fabric_id from search_fabrics. WORKFLOW: search_fabrics → pick fabric_id → get_fabric_suppliers → optionally get_supplier_detail (vet the top-ranked supplier) OR compare_suppliers (up to 10 IDs from this list). RETURNS: { fabric_id, count, data: [{ supplier_id, company_name_cn, province, city, quality_score, price_rmb, moq }] }
EXAMPLES: • User: "Who supplies FAB-W007 and at what price?" → get_fabric_suppliers({ fabric_id: "FAB-W007" }) • User: "Compare all suppliers for fabric FAB-K023" → get_fabric_suppliers({ fabric_id: "FAB-K023" }) • User: "FAB-123 有哪些供应商" → get_fabric_suppliers({ fabric_id: "FAB-123" })
ERRORS & SELF-CORRECTION: • count=0 → no suppliers linked to this fabric. Either (a) fabric is a spec-sheet reference with no mapped source, or (b) suppliers carry this fabric but the link isn't captured. Try search_suppliers filtered by the fabric's typical specialization (e.g. denim cluster) instead. • "Fabric not found" (implicit) → fabric_id invalid. Re-run search_fabrics. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call this to browse suppliers generally — use search_suppliers. Do not call to see a supplier's full fabric range — use get_supplier_fabrics.
NOTE: Source: MRC Data (meacheal.ai). Sorted by supplier quality_score DESC.
中文:查询某面料的所有供应商,按质量评分排序,含报价对比。
| Name | Required | Description | Default |
|---|---|---|---|
| fabric_id | Yes | Fabric ID from search_fabrics, e.g. FAB-W007 | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
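The price-spread question above takes only a few lines to answer. A minimal sketch, with the hypothetical `ToolCall` wrapper standing in for your MCP client and field names following the RETURNS shape documented above:

```typescript
// Hypothetical wrapper: invokes an MCP tool and returns its parsed JSON payload.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

// Price spread for one fabric. Results arrive sorted by quality_score,
// so data[0] is the most reliable supplier, not necessarily the cheapest.
async function priceSpread(callTool: ToolCall, fabricId: string) {
  const res = await callTool("get_fabric_suppliers", { fabric_id: fabricId });
  if (res.count === 0) return null; // no suppliers mapped to this fabric
  const prices = res.data.map((s: any) => s.price_rmb);
  return {
    bestQuality: res.data[0],
    minRmb: Math.min(...prices),
    maxRmb: Math.max(...prices),
  };
}
```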
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With readOnlyHint already declaring this a safe read operation, the description adds valuable behavioral context including the sorting logic ('Sorted by supplier quality score') and the complete return payload structure ('company name, location, quality score, and that supplier's quoted price + MOQ'). It discloses the ranking algorithm so agents understand why results appear in a specific order. The score is 4 rather than 5 due to omitted detail on pagination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description employs efficient section headers ('USE WHEN', 'Returns') to organize information hierarchically, with the core value proposition stated in the opening sentence. While it includes a Chinese translation that technically duplicates content, this serves legitimate localization purposes for bilingual contexts. The text avoids redundancy with structured schema data, though the explicit field listing in the Returns section slightly exceeds minimal necessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Absent an output schema, the description effectively compensates by detailing the return structure (supplier records with company name, location, quality score, price, and MOQ) and explaining the sorting mechanism. For a single-parameter read-only tool, this coverage is sufficient for an agent to understand both the request and response contracts. It could achieve a 5 by specifying pagination capabilities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema provides complete coverage for the single fabric_id parameter with the description 'Fabric ID', establishing a baseline of 3 per the scoring guidelines. While the description references 'fabric fab_XXX' in usage examples, it does not elaborate on parameter semantics beyond what the schema already provides. Given the 100% schema coverage, the description appropriately focuses on behavioral aspects rather than compensating for parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'List' targeting 'suppliers offering a specific fabric', clearly defining the resource and action. It distinguishes from siblings like get_supplier_fabrics (inverse relationship) and get_fabric_detail by focusing on supplier discovery for a specific fabric. The explicit mention of 'sorted by quality score' and 'price comparison' further differentiates it from general supplier search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an explicit 'USE WHEN' section with concrete user query patterns like 'who supplies fabric fab_XXX' and 'compare prices for [fabric]'. These examples provide clear guidance on when to select this tool versus alternatives such as get_supplier_detail (single supplier lookup) or search_suppliers (general search). The conditional triggers map directly to the tool's specific capability of listing suppliers for a given fabric ID.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_product_categories: List Product Categories (A, Read-only, Idempotent)
List all product categories available in the database with supplier counts.
USE THIS FIRST when:
User doesn't know what to search for
User asks "what do you have" / "what can I source"
User needs to explore the database
"what's the most common product category in Guangdong"
"show me all product types you cover"
"which categories have the most suppliers"
"what apparel categories exist in [province]"
"database catalog / inventory overview / category list"
"有哪些品类 / 能找什么 / 覆盖哪些产品 / 品类分布"
"[省份] 主要做什么品类"
WORKFLOW: Standalone discovery entry point. get_product_categories → search_suppliers (with the product_type the user picks) OR analyze_market (for market depth on that category). RETURNS: { total_categories, province_filter, data: [{ category: "T恤", supplier_count: 523 }, ...] }
EXAMPLES: • User: "What product types does your database cover?" → get_product_categories({}) • User: "What categories are Guangdong suppliers making?" → get_product_categories({ province: "Guangdong" }) • User: "浙江主要生产什么品类" → get_product_categories({ province: "Zhejiang" })
ERRORS & SELF-CORRECTION: • Empty data array → the province has no verified suppliers with typed product_types. Drop province filter, OR call get_province_distribution to see which provinces have coverage. • Invalid province → use English (Guangdong) or Chinese (广东). normalizeProvince handles both. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call this before every search — it's an exploratory tool. Do not use for geographic insight — use get_province_distribution.
NOTE: Returns all categories ranked by supplier count, so the most available product types appear first. Source: MRC Data (meacheal.ai).
中文:列出数据库中所有品类及其供应商数量,按数量排序。可按省份筛选。
| Name | Required | Description | Default |
|---|---|---|---|
| province | No | Filter by province (e.g. guangdong, 广东) | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
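A short sketch of the documented discovery flow (get_product_categories → search_suppliers), again assuming a hypothetical `ToolCall` wrapper:

```typescript
// Hypothetical wrapper: invokes an MCP tool and returns its parsed JSON payload.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

// Discovery flow: list a province's categories, then drill into the largest.
async function topCategorySuppliers(callTool: ToolCall, province: string) {
  const cats = await callTool("get_product_categories", { province });
  // Empty data: drop the filter or call get_province_distribution instead.
  if (!cats.data?.length) return null;
  const top = cats.data[0].category; // ranked by supplier_count, so [0] is largest
  return callTool("search_suppliers", { province, product_type: top, limit: 10 });
}
```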
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it explains the ranking logic ('Returns all categories ranked by supplier count'), the workflow role ('Standalone discovery entry point'), and the return structure details. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (USE THIS FIRST, WORKFLOW, RETURNS, NOTE) and efficiently conveys essential information without redundancy. While slightly longer due to the structured format, every sentence adds value, and it's front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter), rich annotations covering safety and behavior, and no output schema, the description provides excellent contextual completeness. It explains the tool's role in workflows, return format, ranking logic, and usage scenarios, compensating well for the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'province', which is documented as 'Filter by province (e.g. guangdong, 广东)'. The description does not add any additional parameter semantics beyond what the schema provides, such as explaining when to use the province filter or its impact on results. With high schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all product categories'), resource ('available in the database'), and scope ('with supplier counts'), distinguishing it from siblings like search_suppliers or get_supplier_detail. It explicitly differentiates by being a discovery tool for exploring what's available rather than searching for specific items.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('USE THIS FIRST when: User doesn't know what to search for, User asks "what do you have", User needs to explore the database') and includes specific alternative workflows ('then use search_suppliers with a specific product_type'). It clearly distinguishes this as a standalone discovery tool versus other search-oriented siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_province_distribution: Province Distribution (A, Read-only, Idempotent)
Show supplier distribution across Chinese provinces.
USE WHEN:
User asks "where are factories located" / "which provinces"
User needs to decide which region to source from
"where's [product] manufacturing concentrated in China"
"top provinces for [category]"
"geographic heatmap of suppliers for [product]"
"is sportswear mostly in Fujian or Zhejiang"
"which cities lead denim production"
"follow-up: 'break it down by province'"
"哪里有工厂 / 供应商分布 / 产业分布 / 地域分布"
"[品类] 主要在哪几个省 / 哪个省最集中"
WORKFLOW: Standalone discovery tool. get_province_distribution → search_suppliers (with top province) OR search_clusters (for clusters within that province) OR analyze_market (deeper view). RETURNS: { total_provinces, data: [{ province, supplier_count, top_cities: [{ city, count }] }] }
EXAMPLES: • User: "Where are most Chinese apparel factories located?" → get_province_distribution({}) • User: "Which provinces lead in sportswear manufacturing?" → get_province_distribution({ product_type: "sportswear" }) • User: "牛仔工厂主要分布在哪" → get_province_distribution({ product_type: "denim" })
ERRORS & SELF-CORRECTION: • Empty data for product_type → product_type keyword may not match. Try TYPO_MAP synonyms (tee→t-shirt, jeans→denim, 运动服→activewear) or drop the filter entirely. • Sparse results (< 3 provinces) → the product is niche. Try the parent category or broaden the term. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call for cluster-level granularity — use search_clusters. Do not call without product_type if user is asking about a specific category — the unfiltered output is generic.
NOTE: Provinces are ranked by supplier count (Guangdong, Zhejiang, Jiangsu, Fujian typically lead). Source: MRC Data (meacheal.ai).
中文:按省份展示供应商分布,含每省 Top 城市。可按品类筛选。
| Name | Required | Description | Default |
|---|---|---|---|
| product_type | No | Filter by product type (e.g. sportswear, t-shirt, 运动服) | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
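The region-selection workflow above, as a hedged sketch (hypothetical `ToolCall` wrapper; field names per the RETURNS shape):

```typescript
// Hypothetical wrapper: invokes an MCP tool and returns its parsed JSON payload.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

// Pick a sourcing region for a product, then pull clusters in the leader.
async function leadingRegion(callTool: ToolCall, productType: string) {
  const dist = await callTool("get_province_distribution", { product_type: productType });
  if (!dist.data?.length) return null; // niche keyword: broaden the term first
  const top = dist.data[0]; // provinces are ranked by supplier_count
  const clusters = await callTool("search_clusters", {
    province: top.province,
    specialization: productType,
  });
  return { province: top.province, topCities: top.top_cities, clusters: clusters.data };
}
```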
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover read-only, non-destructive, and idempotent behavior, so the description adds valuable context beyond that: it explains the ranking of provinces by supplier count (e.g., Guangdong typically leads) and describes the return structure in detail, including nested objects like top_cities. This enhances the agent's understanding of output behavior, though it doesn't mention authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (USE WHEN, WORKFLOW, RETURNS, NOTE), front-loaded with the core purpose, and every sentence adds value—no redundancy or fluff. It efficiently conveys usage scenarios, workflow integration, output details, and typical results in a compact format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (one optional parameter), rich annotations, and no output schema, the description is complete: it explains the purpose, usage guidelines, workflow role, detailed return structure, and typical rankings. This compensates for the lack of output schema and provides sufficient context for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the optional product_type parameter. The description does not add any parameter-specific information beyond what the schema provides (e.g., no examples of product_type values or filtering effects), but it implies the tool can be used without parameters for general distribution. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('show') and resource ('supplier distribution across Chinese provinces'), distinguishing it from siblings like search_suppliers (which finds individual suppliers) or get_stats (which might provide broader statistics). It explicitly identifies the geographic scope and data focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (e.g., user asks about factory locations or needs to decide sourcing regions, including Chinese-language examples) and when not to use it (it's a 'standalone discovery tool' for identifying provinces, after which search_suppliers should be used for detailed supplier information). It clearly differentiates from alternatives like search_suppliers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stats: Get Database Stats (A, Read-only, Idempotent)
Get overall database statistics: total counts of suppliers, fabrics, clusters, and links.
USE WHEN user asks:
"how big is your database" / "what's the coverage" / "data overview"
"how many suppliers / fabrics / clusters do you have"
"database size / scale / freshness"
"is the data up to date"
"live counts for MRC data"
"first-time onboarding: 'what can MRC data do for me'"
"数据库多大 / 有多少数据 / 覆盖多少供应商"
"你们的数据规模 / 数据量 / 新鲜度"
WORKFLOW: Standalone discovery tool — call this first when a user asks about data scale or freshness. Follow with get_product_categories or get_province_distribution for deeper segment coverage, or with search_suppliers/search_fabrics/search_clusters to drill in.
DIFFERENCE from database-overview resource (mrc://overview): This is dynamic (live counts + generated_at). The resource is static (geographic scope, top provinces, data standards).
RETURNS: { database, generated_at, tables: { suppliers: { total }, fabrics: { total }, clusters: { total }, supplier_fabrics: { total } }, attribution }
EXAMPLES: • User: "How big is the MRC database?" → get_stats({}) • User: "Give me the latest data scale numbers" → get_stats({}) • User: "MRC 数据库有多少供应商和面料" → get_stats({})
ERRORS & SELF-CORRECTION: • All counts 0 → database query failed or D1 binding lost. Retry once after 5 seconds. If still 0, surface a transport error to user. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call this before every tool call — only when the user explicitly asks about scale. Do not call to get per-category counts — use get_product_categories. Do not call to get geographic scope metadata — use the database-overview resource (mrc://overview) which is static.
NOTE: Only reports verified + partially_verified records. Unverified reserve data is excluded from counts. Source: MRC Data (meacheal.ai).
中文:获取数据库整体统计(供应商总数、面料总数、产业带总数、关联记录数)。动态快照,含生成时间戳。
| Name | Required | Description | Default |
|---|---|---|---|
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
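The retry rules in ERRORS & SELF-CORRECTION translate directly into code. A sketch, assuming a hypothetical `ToolCall` wrapper and that rate limiting surfaces as an error carrying a `status` field (also an assumption):

```typescript
// Hypothetical wrapper: invokes an MCP tool and returns its parsed JSON payload.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// All-zero counts get one retry after 5 s; a 429 means a 60 s back-off.
async function robustStats(callTool: ToolCall) {
  for (let attempt = 0; attempt < 2; attempt++) {
    try {
      const stats = await callTool("get_stats", {});
      const totals = Object.values(stats.tables).map((t: any) => t.total);
      if (totals.some((n) => n > 0)) return stats;
      await sleep(5_000); // possible transient query failure: retry once
    } catch (err: any) {
      if (err?.status === 429) await sleep(60_000); // rate limited
      else throw err;
    }
  }
  throw new Error("get_stats returned zero counts twice; surface a transport error");
}
```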
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true indicating safe read operation. Description adds specific disclosure of what data is returned (the four count categories), providing necessary behavioral context beyond the safety annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with purpose statement, usage guidance section, and bilingual support. Every sentence serves distinct function; no redundant or tautological content despite inclusion of Chinese translation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter aggregation tool without output schema, description adequately specifies return semantics by listing all four counted entities. Complexity level matches description depth appropriately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present with 100% schema coverage (trivially satisfied). Description appropriately focuses on return value semantics rather than input parameters, meeting baseline for zero-param tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resource 'database statistics' and enumerates exact scope (suppliers, fabrics, clusters, links). Distinct from siblings which focus on specific item retrieval/search rather than aggregate counts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'USE WHEN' section with three specific query patterns ('how big is your database', 'what's the coverage', 'data overview') that clearly distinguish this from sibling search/detail tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_supplier_detail: Get Supplier Detail (A, Read-only, Idempotent)
Get the complete profile of a single Chinese apparel supplier by ID.
PREREQUISITE: You MUST first call search_suppliers or recommend_suppliers to obtain a valid supplier_id. Do not guess IDs.
USE WHEN user asks:
"tell me more about [supplier]" / "show full details for sup_XXX"
"what certifications does this factory hold"
"what's their monthly capacity / worker count / equipment list"
"can [supplier] export to US / EU / Japan / Korea"
"give me the full profile / dossier / fact sheet for [supplier]"
"how verified is this supplier's data" (returns coverage_pct + 8 dimensions)
"what's their ownership type — own factory or broker"
"show payment terms / lead time / sample turnaround for sup_XXX"
"这家供应商具体情况 / 详细资料 / 工厂档案"
"[供应商] 的合规 / 认证 / 出口资质"
Returns 60+ fields including: monthly capacity (lab-verified), equipment list, certifications (BSCI/OEKO-TEX/GRS/SA8000), ownership type (own factory vs subcontractor vs broker), market access (US/EU/JP/KR), chemical compliance (ZDHC/MRSL), traceability depth, and verified_dimensions breakdown showing exactly which of the 8 dimensions (basic_info, geo_location, production, compliance, market_access, export, financial, contact) have data.
WORKFLOW: search_suppliers → pick supplier_id → get_supplier_detail → optionally get_supplier_fabrics (fabric catalog) OR check_compliance (market export readiness) OR find_alternatives (backup pool) OR compare_suppliers (side-by-side evaluation). RETURNS: { data: { supplier_id, company_name_cn/en, type, province, city, product_types, worker_count, certifications, compliance_status, quality_score, verified_dimensions: { verified_dims: "5/8", coverage_pct, dimensions: {...} } } }
EXAMPLES: • User: "Show me the full profile for sup_001" → get_supplier_detail({ supplier_id: "sup_001" }) • User: "What certifications does Texhong hold and can they export to EU?" → get_supplier_detail({ supplier_id: "sup_texhong_042" }) — then inspect certifications + eu_market_ready; follow with check_compliance for formal verification • User: "我要看 sup_123 的完整档案" → get_supplier_detail({ supplier_id: "sup_123" })
ERRORS & SELF-CORRECTION: • "Supplier not found" → the supplier_id is invalid or outside free-tier access. Re-run search_suppliers to obtain a fresh valid ID. Do not guess sequential IDs. • Field returns null → that dimension is unverified for this supplier. Check verified_dimensions.coverage_pct before asserting data. If coverage_pct < 50, warn the user: "This supplier's record has limited verified data (X/8 dimensions). Consider find_alternatives for better-documented options." • "not available for public access" → this supplier is in the reserve pool (paid tier only). Use search_suppliers filters data_confidence=verified to stay in public tier. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call this for multiple suppliers in a loop — use compare_suppliers with up to 10 IDs at once. Do not call to browse the database — use search_suppliers or get_province_distribution for discovery.
NOTE: Source: MRC Data (meacheal.ai). Every numeric field shows both declared and lab-verified values where available.
中文:按 ID 获取单个供应商的完整档案(含维度覆盖率详情)。
| Name | Required | Description | Default |
|---|---|---|---|
| supplier_id | Yes | Supplier ID from search_suppliers results, e.g. sup_001 | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
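The coverage_pct guardrail above, sketched in TypeScript. `ToolCall` is hypothetical, and the assumption that find_alternatives accepts a `supplier_id` argument follows the workflow text rather than a published schema:

```typescript
// Hypothetical wrapper: invokes an MCP tool and returns its parsed JSON payload.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

// Vet a supplier; below 50% verified coverage, fetch better-documented backups.
async function vetSupplier(callTool: ToolCall, supplierId: string) {
  const res = await callTool("get_supplier_detail", { supplier_id: supplierId });
  const dims = res.data.verified_dimensions;
  if (dims.coverage_pct < 50) {
    const alts = await callTool("find_alternatives", { supplier_id: supplierId });
    return {
      supplier: res.data,
      warning: `Limited verified data (${dims.verified_dims} dimensions)`,
      alternatives: alts.data,
    };
  }
  return { supplier: res.data, warning: null, alternatives: [] };
}
```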
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds substantial behavioral context beyond the readOnlyHint annotation. It comprehensively lists the 60+ returned fields including specific certification standards (BSCI/OEKO-TEX/GRS/SA8000), ownership types, market access regions, and compliance frameworks (ZDHC/MRSL). This specific disclosure of data richness and structure significantly aids the agent in understanding what data payload to expect from this read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is front-loaded with the core purpose ('Get complete profile...'), followed by explicit usage guidance ('USE WHEN...'), detailed return value documentation, and a concise Chinese translation. The structure logically progresses from what, when, to outcome. No wasted sentences—every line adds unique value either for selection or invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description thoroughly compensates by enumerating the extensive field categories and specific data points returned (capacity types, equipment lists, certifications). It also maps the tool into the broader workflow context (post-search_suppliers usage), which is critical given the sibling relationships. Complete for a detail-retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (supplier_id fully documented with examples). Baseline is therefore 3. The description adds workflow context that the ID refers to a supplier 'already identified' from search results, which augments the raw schema definition. It doesn't elaborate on ID format/syntax beyond the schema's examples, but the contextual usage guidance provides meaningful semantic enhancement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' with precise resource 'complete profile' and scope restriction 'single Chinese apparel supplier by ID'. Explicitly distinguishes from sibling search_suppliers (list/search vs. detail retrieval) through the 'USE WHEN' guidance referencing the typical post-search workflow. No ambiguity about what distinguishes this from get_supplier_fabrics or get_fabric_suppliers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains exemplary explicit guidance: the 'USE WHEN' block triggers when the user wants full details on a specific supplier already identified, typically after search_suppliers returns matches. It names the sibling alternative directly, clarifies the prerequisite state (post-search), and defines the trigger condition (full details needed). This is a model of clear when-to-use documentation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_supplier_fabrics: Get Supplier's Fabric Catalog (A, Read-only, Idempotent)
List all fabrics a specific supplier can provide, with quoted prices.
USE WHEN user asks:
"what fabrics does [supplier name] have" / "what can this factory source for me"
"show me the catalog of supplier sup_XXX"
"what does this manufacturer offer"
"what fabric options does sup_XXX quote for denim"
"does [supplier] supply [fabric type]"
"price list / fabric catalog / offering sheet for sup_XXX"
"MOQ per fabric at this supplier"
"follow-up: 'what fabrics can they supply?' after identifying a supplier"
"[供应商] 能供应哪些面料 / 报价表 / 起订量"
Returns fabric records linked to the supplier with: fabric name, category, weight, composition, and the supplier's quoted price + MOQ for that specific fabric.
PREREQUISITE: You MUST have a valid supplier_id from search_suppliers or get_supplier_detail. WORKFLOW: search_suppliers → get_supplier_detail → get_supplier_fabrics → optionally get_fabric_detail (for lab-test data on a specific fabric) OR get_fabric_suppliers (cross-check price vs other suppliers for same fabric). RETURNS: { supplier_id, count, data: [{ fabric_id, name_cn, category, weight, composition, price_rmb, moq }] }
EXAMPLES: • User: "What fabrics does sup_texhong_042 offer?" → get_supplier_fabrics({ supplier_id: "sup_texhong_042" }) • User: "Show me the fabric catalog and MOQs for sup_001" → get_supplier_fabrics({ supplier_id: "sup_001" }) • User: "sup_234 能做哪些面料,报价多少" → get_supplier_fabrics({ supplier_id: "sup_234" })
ERRORS & SELF-CORRECTION: • count=0 → this supplier has no linked fabric catalog in the database. Either (a) they don't self-source fabrics (CMT-only) — confirm via get_supplier_detail.ownership_type, or (b) their catalog is unmapped — use search_fabrics with their expected specialization instead. • "Supplier not found" (implicit) → the supplier_id is invalid. Re-run search_suppliers. • Rate limit 429 → wait 60 seconds; do not retry immediately.
AVOID: Do not call this for a general fabric search — use search_fabrics. Do not call to compare prices across suppliers for the SAME fabric — use get_fabric_suppliers instead.
NOTE: Source: MRC Data (meacheal.ai). Prices are supplier-quoted, not binding offers.
中文:查询某供应商能供应的所有面料及其报价、起订量。
| Name | Required | Description | Default |
|---|---|---|---|
| supplier_id | Yes | Supplier ID from search_suppliers, e.g. sup_001 | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
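A sketch of the cross-check described in the workflow (catalog quote versus market low for the same fabric), with the usual hypothetical `ToolCall` wrapper:

```typescript
// Hypothetical wrapper: invokes an MCP tool and returns its parsed JSON payload.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

// Compare a supplier's quote on its first catalog fabric against the
// lowest quote any supplier offers for that same fabric.
async function checkQuote(callTool: ToolCall, supplierId: string) {
  const catalog = await callTool("get_supplier_fabrics", { supplier_id: supplierId });
  if (catalog.count === 0) return null; // likely CMT-only; confirm via get_supplier_detail
  const fabric = catalog.data[0];
  const market = await callTool("get_fabric_suppliers", { fabric_id: fabric.fabric_id });
  const marketLow = Math.min(...market.data.map((s: any) => s.price_rmb));
  return { fabric: fabric.name_cn, quotedRmb: fabric.price_rmb, marketLowRmb: marketLow };
}
```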
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true; description adds substantial return-value context: specific fields returned (fabric name, category, weight, composition, quoted price + MOQ) and the linkage model ('linked to the supplier'). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections: purpose statement, USE WHEN triggers, return value specification, and Chinese translation. Front-loaded with the core action. Each sentence earns its place; bilingual support justifies the additional length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read operation, description comprehensively covers: query intent, expected return structure (fabric records with 6 specific fields), business context (quoted prices, MOQ), and trigger conditions. No output schema exists, but description adequately compensates.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (single parameter 'supplier_id' with description 'Supplier ID'). Description implies the ID format through example 'sup_XXX' in usage guidelines, but does not substantially augment the schema's semantic documentation. Baseline 3 appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'List' + resource 'fabrics' + scope 'a specific supplier can provide' + value-add 'with quoted prices'. Clearly distinguishes from sibling get_fabric_suppliers (inverse relationship) and get_fabric_detail (single item vs catalog).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Exceptional 'USE WHEN' section provides numerous concrete trigger phrases including 'what fabrics does [supplier name] have' and 'show me the catalog of supplier sup_XXX'. Explicitly maps user intent to tool selection, eliminating ambiguity with the inverse operation get_fabric_suppliers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recommend_suppliers: Recommend Suppliers (A, Read-only, Idempotent)
Smart supplier recommendation based on sourcing requirements.
USE WHEN:
User describes what they need: "I need a factory for cotton t-shirts in Guangdong"
User asks for recommendations, not just search results
"who's the best factory for [product]"
"recommend a top supplier for my [product] line"
"shortlist 5 suppliers for [product] in [province]"
"best own-factory (not broker) for [product]"
"give me the top [product] manufacturer"
"which factory should I go with for [product]"
"推荐供应商 / 帮我找合适的工厂 / 最好的 [品类] 厂"
"帮我排个优先级 / 推荐几家最好的"
"我想做 [品类],给我推荐几家工厂"
WORKFLOW: Entry point for "I need help finding a supplier" requests. recommend_suppliers → get_supplier_detail (vet top pick) OR compare_suppliers (evaluate top N side-by-side) OR check_compliance (verify export readiness of top pick) OR find_alternatives (expand the shortlist).
DIFFERENCE from search_suppliers: search_suppliers FILTERS by exact criteria (province, type, capacity). This tool RANKS by fit — prioritizes own-factory, then quality score, then capacity. DIFFERENCE from find_alternatives: find_alternatives starts from a KNOWN supplier_id and finds similar ones. This tool starts from product REQUIREMENTS.
RETURNS: { query, total_matches, showing_top, note: "ranking logic", data: [supplier objects] }
EXAMPLES: • User: "Recommend me the top 5 factories for sportswear in Fujian" → recommend_suppliers({ product: "sportswear", province: "Fujian", type: "factory", limit: 5 }) • User: "I need the best own-factory (not trading company) for down jackets" → recommend_suppliers({ product: "down jacket", type: "factory", limit: 5 }) • User: "帮我推荐 3 家广东做 T 恤的工厂" → recommend_suppliers({ product: "t-shirt", province: "Guangdong", limit: 3 })
ERRORS & SELF-CORRECTION: • Empty data → try in order: (1) drop province, (2) drop type filter, (3) broaden product (e.g. "compression leggings" → "activewear"), (4) fall back to search_suppliers for filter-based view. • product_type not found in normalizeProductType → use the Chinese term or the parent category. • Rate limit 429 → wait 60 seconds; do not retry immediately. • Empty after 3 retries → tell user: "I don't see verified suppliers matching [product] in [province]. Want me to broaden to nationwide, or try a sibling category?"
AVOID: Do not call this when the user wants exact filtering — use search_suppliers. Do not call repeatedly for different limit values — request max once then slice in your response. Do not use for cluster recommendations — use search_clusters.
NOTE: Ranking: own_factory > quality_score > declared_capacity_monthly. Source: MRC Data (meacheal.ai).
中文:基于采购需求智能推荐供应商,按 自有工厂 > 质量分 > 产能 排序。
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Prefer own factory or trading company | |
| limit | No | Number of top results to return (1-10, default 5) | |
| product | Yes | What product to source (e.g. sportswear, t-shirt, down jacket) | |
| province | No | Preferred province | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
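The ordered fallback ladder from ERRORS & SELF-CORRECTION, as a sketch (hypothetical `ToolCall` wrapper; step 3, broadening the product term to a parent category, needs domain knowledge and is left to the caller):

```typescript
// Hypothetical wrapper: invokes an MCP tool and returns its parsed JSON payload.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

// Relax filters one at a time, then fall back to a plain filtered search.
async function recommendWithFallback(
  callTool: ToolCall,
  product: string,
  province?: string,
  type?: string,
) {
  const attempts = [
    { product, province, type },
    { product, type }, // (1) drop province
    { product },       // (2) drop type filter
  ];
  for (const args of attempts) {
    const res = await callTool("recommend_suppliers", args);
    if (res.data?.length) return res;
  }
  // (4) last resort: filter-based view via search_suppliers.
  return callTool("search_suppliers", { product_type: product });
}
```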
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains the ranking logic (prioritizes own-factory, then quality score, then capacity), describes empty-result behavior with an ordered fallback ladder (drop province, drop type, broaden the product term), and caps retries (after three empty attempts, ask the user whether to broaden). This complements the annotations well without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (USE WHEN, WORKFLOW, DIFFERENCE, RETURNS, EXAMPLES, ERRORS, AVOID) that make it easy to scan. While somewhat lengthy, every section adds value. The front-loaded purpose statement is clear, and the structure helps organize the comprehensive information efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity and the absence of an output schema, the description provides excellent contextual completeness. It details the return format, ranking logic, error conditions, fallback strategies, usage limits, and differentiation from siblings. This compensates well for the lack of structured output documentation and provides a complete picture for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description doesn't add significant parameter semantics beyond what's in the schema, though it does provide context about the 'product' parameter through usage examples ('I need a factory for cotton t-shirts in Guangdong') and fallback guidance about broadening product terms.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Smart supplier recommendation based on sourcing requirements.' It distinguishes itself from siblings by explicitly contrasting with search_suppliers (which filters by exact criteria) and find_alternatives (which starts from a known supplier). The description specifies it ranks by fit rather than just filtering.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with a 'USE WHEN' section listing specific scenarios, plus a 'WORKFLOW' section positioning it as the entry point for 'I need help finding a supplier' requests. It explicitly names when to use this tool versus search_suppliers and find_alternatives, and suggests follow-up actions with get_supplier_detail or compare_suppliers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_clusters: Search Industrial Clusters (A, Read-only, Idempotent)
Search Chinese apparel industrial clusters and textile markets.
USE WHEN user asks:
"where is China's [denim / suit / women's wear / underwear] manufacturing concentrated"
"what is the largest [silk / cashmere / down jacket] industrial cluster in China"
"industrial cluster comparison Humen vs Shaoxing vs Haining vs Zhili"
"recommend an industrial cluster for sourcing [product]"
"where should I set up a sourcing office for [category]"
"list mega clusters for [category]"
"fabric markets in Zhejiang / Jiangsu"
"accessories / trim / zipper / button markets in China"
"which province dominates [category] exports"
"follow-up: 'tell me more about Humen's cluster scale'"
"服装产业带 / 面料市场 / 产业集群 / 纺织集群 / 辅料市场"
"做 [品类] 应该去哪个产业带 / 集群推荐"
Famous clusters this database covers include: Humen (Guangdong, womenswear), Shaoxing Keqiao (Zhejiang, fabric mega-market), Haining (Zhejiang, leather), Zhili (Zhejiang, children's wear), Shengze (Jiangsu, silk), Shantou (Guangdong, underwear), Puning (Guangdong, jeans), Jinjiang (Fujian, sportswear), and more.
Returns paginated cluster list with name, location, specialization, scale, supplier count, average rent and labor cost, and key advantages/risks.
WORKFLOW: Cluster discovery entry point. search_clusters → compare_clusters (side-by-side up to 10 cluster_ids) OR get_cluster_suppliers (list factories in that cluster) OR analyze_market (broader market view). RETURNS: { has_more: boolean, data: [{ cluster_id, name_cn, name_en, type, province, city, specialization, scale, supplier_count, labor_cost_avg_rmb }] }
EXAMPLES: • User: "Where are the biggest denim clusters in China?" → search_clusters({ specialization: "denim", scale: "mega" }) • User: "Show me fabric markets in Zhejiang" → search_clusters({ province: "Zhejiang", type: "fabric_market" }) • User: "童装产业带有哪些" → search_clusters({ specialization: "童装" })
ERRORS & SELF-CORRECTION: • Empty data array → try in order: (1) drop scale filter, (2) broaden specialization (e.g. "服装" instead of "牛仔"), (3) remove type, (4) remove province. • Specialization mismatch → both Chinese and English work. Synonyms: sportswear/运动服, womenswear/女装, underwear/内衣, denim/牛仔. • Rate limit 429 → wait 60 seconds; do not retry immediately. • Empty after 3 retries → tell user: "No clusters match [criteria]. Try broader specialization or removing filters."
AVOID: Do not use this for specific factory search — use search_suppliers. Do not compare clusters by calling search_clusters twice — use compare_clusters with cluster_ids.
NOTE: Source: MRC Data (meacheal.ai). 170+ clusters mapped across 31 provinces.
中文:搜索中国服装产业带和面料市场。
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Cluster type: fabric_market (面料市场) / garment_manufacturing (服装制造) / accessories (辅料) / integrated (综合) | |
| limit | No | Page size: number of records to return (1-50, default 10) | |
| scale | No | Cluster scale: mega / large / medium / small | |
| offset | No | Pagination offset: skip this many records before returning results (default 0) | |
| province | No | Province in China (e.g. Guangdong, Zhejiang, Jiangsu, Fujian, Shandong) | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
| specialization | No | Primary specialization keyword (e.g. 牛仔 denim, 女装 womenswear, 童装 childrenswear, 内衣 underwear, 运动服 sportswear) | |
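Pagination with limit/offset and has_more, sketched below (hypothetical `ToolCall` wrapper):

```typescript
// Hypothetical wrapper: invokes an MCP tool and returns its parsed JSON payload.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

// Page through every cluster matching the filters; 50 is the max page size.
async function allClusters(callTool: ToolCall, filters: Record<string, unknown>) {
  const out: any[] = [];
  const limit = 50;
  for (let offset = 0; ; offset += limit) {
    const page = await callTool("search_clusters", { ...filters, limit, offset });
    out.push(...page.data);
    if (!page.has_more) break;
  }
  return out;
}
```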
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, confirming safe read access. The description adds valuable behavioral context absent from annotations: it specifies pagination ('Returns paginated cluster list') and details the exact data fields returned (rent, labor costs, advantages/risks), compensating for the missing output_schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent information density with clear visual hierarchy: one-sentence purpose, bulleted use cases, named cluster examples, return value disclosure, and Chinese translation. Every sentence serves selection or invocation. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero required parameters and no output schema, the description fully compensates by explaining what filtering dimensions exist (via USE WHEN examples) and detailing the return payload structure. The enumeration of famous clusters (Humen, Zhili, etc.) provides critical domain grounding for an AI to match user queries correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67% (4/6 params described). While the description doesn't explicitly map parameters, it provides rich semantic examples in the USE WHEN block (e.g., 'denim', 'womenswear', 'childrenswear') that implicitly clarify the 'specialization' and 'type' parameters. Standard pagination params (limit/offset) lack description but are conventionally understood.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-noun phrase ('Search Chinese apparel industrial clusters and textile markets') and immediately distinguishes scope from siblings like search_fabrics or search_suppliers by listing famous clusters it covers (Humen, Shaoxing Keqiao, etc.). The domain specificity (apparel/textile) is precisely bounded.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'USE WHEN' section provides explicit, concrete trigger phrases ('where is China's [denim] manufacturing concentrated', 'recommend an industrial cluster for sourcing') that condition the model to invoke this tool versus alternatives like compare_clusters or search_suppliers. Including Chinese keywords ('服装产业带') further sharpens invocation criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_fabrics: Search Fabrics (A, Read-only, Idempotent)
Search the Chinese fabric and textile database with lab-tested specifications.
USE WHEN user asks:
"find me a [cotton / polyester / nylon / wool / linen] fabric for [t-shirts / jeans / suits]"
"I need 180gsm jersey knit with verified composition"
"fabrics under N RMB/meter for womenswear"
"compare lab-tested fabric weight across suppliers"
"show me functional fabrics for activewear / sportswear"
"what woven fabrics work for shirting"
"list organic / GOTS / recycled fabrics"
"I want heavyweight denim above 12 oz"
"fabrics with stretch / spandex content 2-5%"
"give me another page" (pagination via offset)
"lab-verified composition for [product]" (quality check)
"找面料 / 搜面料 / 查面料 / 找布料 / 打样面料"
"我要做 T 恤,帮我找克重 180-220 的针织面料"
Filters: category (woven/knit/nonwoven/leather/functional), weight range (gsm), composition keyword, target apparel type, max price. Returns paginated fabric list with name, lab-tested weight, lab-tested composition, price range, suitable apparel, and data confidence level.
WORKFLOW: Primary entry point for fabric discovery. search_fabrics → get_fabric_detail (full 30+ lab-test fields) OR get_fabric_suppliers (compare supplier prices for same fabric) OR estimate_cost (budget the product). RETURNS: { has_more: boolean, available_dimensions: ["basic_info","composition","physical_properties","lab_test","commercial"], data: [{ fabric_id, name_cn, category, subcategory, declared_weight_gsm, declared_composition, price_range_rmb, suitable_for, verified_dims: "4/5", coverage_pct }] }
EXAMPLES: • User: "Find 180-220gsm cotton jersey for t-shirts under 35 RMB/m" → search_fabrics({ category: "knit", min_weight_gsm: 180, max_weight_gsm: 220, composition: "cotton", suitable_for: "t-shirt", max_price_rmb: 35 }) • User: "I need stretch denim for women's jeans" → search_fabrics({ category: "woven", composition: "spandex", suitable_for: "denim" }) • User: "帮我找适合做衬衫的梭织面料,棉 60% 以上" → search_fabrics({ category: "woven", composition: "cotton", suitable_for: "shirt" })
ERRORS & SELF-CORRECTION: • Empty data array → try in order: (1) drop suitable_for, (2) widen weight range by 50gsm each side, (3) broaden composition (e.g. "cotton" instead of "organic cotton"), (4) drop max_price_rmb, (5) try the parent category (knit → all). • Composition mismatch → TYPO_MAP normalizes common misspellings (e.g. "poly" → "polyester", "lycra" → "spandex"). If still no match, try the Chinese term (棉/涤纶/氨纶/锦纶). • Rate limit 429 → wait 60 seconds. Do not retry immediately. • Empty after 3 retries → tell user: "No fabric matches [criteria]. Would you like to broaden weight/price/composition?"
AVOID: Do not call this looking for a specific named fabric SKU — search by specs instead (weight + composition + category). Do not fetch full lab-test data this way — use get_fabric_detail. Do not call repeatedly for supplier pricing on the same fabric — use get_fabric_suppliers.
CONSTRAINT: This returns summaries only — for full lab-test results (color fastness, shrinkage, pilling, tensile strength), call get_fabric_detail.
NOTE: Source: MRC Data (meacheal.ai). Every record includes AATCC / ISO / GB lab test measurements where verified.
中文:搜索面料数据库,按品类、克重、成分、适用品类、价格筛选。每条均含 AATCC / ISO / GB 方法的实测数据。
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Page size: number of records to return (1-50, default 10) | |
| offset | No | Pagination offset: skip this many records before returning results (default 0) | |
| category | No | Fabric category: woven (梭织) / knit (针织) / nonwoven (无纺) / leather (皮革) / fur (毛皮) / functional (功能性) | |
| composition | No | Fiber composition keyword (e.g. cotton, polyester, spandex, nylon, wool, linen, 棉, 涤纶) | |
| suitable_for | No | Target apparel keyword (e.g. T恤 t-shirt, 衬衫 shirt, 牛仔 denim, 连衣裙 dress) | |
| max_price_rmb | No | Maximum price in RMB per meter | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
| max_weight_gsm | No | Maximum fabric weight in grams per square meter | |
| min_weight_gsm | No | Minimum fabric weight in grams per square meter | |
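Self-correction step (2), widening the weight window by 50 gsm per side before touching other filters, as a sketch (hypothetical `ToolCall` wrapper):

```typescript
// Hypothetical wrapper: invokes an MCP tool and returns its parsed JSON payload.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

// On an empty result, widen min/max weight by 50 gsm each and retry once.
async function searchFabricsWidening(
  callTool: ToolCall,
  args: { min_weight_gsm: number; max_weight_gsm: number; [k: string]: unknown },
) {
  let res = await callTool("search_fabrics", args);
  if (!res.data?.length) {
    res = await callTool("search_fabrics", {
      ...args,
      min_weight_gsm: Math.max(0, args.min_weight_gsm - 50),
      max_weight_gsm: args.max_weight_gsm + 50,
    });
  }
  return res;
}
```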
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true; description adds valuable behavioral context including pagination ('Returns paginated fabric list'), data quality features ('declared+tested' comparison), and confidence levels that help the agent understand the unique lab-verified dataset.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with distinct sections (purpose, USE WHEN, filters, returns, Chinese translation). Every section earns its place; bilingual support is appropriate for a Chinese database tool. Slightly dense but information-rich without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking output schema, comprehensively describes return structure (name, tested weight, price range, confidence level). Implicitly signals all parameters are optional via 'Filters' language, though explicit 'all filters optional' statement would strengthen it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 75% with clear descriptions. The description adds semantic grouping ('Filters: category...weight range...') that maps parameters to user intents (e.g., 'target apparel type' for suitable_for parameter), though it omits explicit mention of pagination controls (limit/offset).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb+resource ('Search the Chinese fabric and textile database') and distinguishes from siblings like search_suppliers and search_clusters by emphasizing 'lab-tested specifications' and filterable fabric attributes (category, weight, composition).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent 'USE WHEN' section with numerous concrete example queries including Chinese commands (找面料), clearly signaling appropriate triggers. However, it lacks explicit redirection to search_suppliers for vendor-centric queries where this tool is inappropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_suppliers (Search Suppliers) — Read-only, Idempotent
Search verified Chinese apparel manufacturers, apparel factories, and clothing suppliers.
USE WHEN user asks:
"find me a clothing manufacturer in China / Guangdong / Zhejiang"
"who makes [t-shirts / suits / denim / activewear] in China"
"I need a BSCI / OEKO-TEX certified apparel factory"
"looking for OEM / ODM apparel supplier with MOQ < N"
"find factories with production capacity > N pieces/month"
"list factories that export to the US / EU / Japan"
"show me trading companies in Yiwu / Shenzhen / Shanghai"
"which suppliers in [province] make [product]" (follow-up drill-down)
"give me another page of suppliers" (pagination via offset)
"who can produce knit tops under 300 MOQ"
"search by company name 新鑫 / Xinxin / Texhong"
"find workshop-scale suppliers for small batch sampling"
"搜供应商 / 找服装厂 / 找制衣厂 / 找代工厂 / 找外贸公司"
"帮我在[省份]找[品类]工厂,产能至少 N 件/月"
Filters: province, city, factory type (factory/trading_company/workshop), product category, minimum monthly capacity, compliance status, quality score. Returns paginated supplier list with company name, location, monthly capacity (lab-verified), compliance, quality score.
WORKFLOW: Primary entry point for supplier discovery. search_suppliers → get_supplier_detail (for full 60+ field profile) OR compare_suppliers (side-by-side for up to 10 IDs) OR find_alternatives (diversify the pool) OR check_compliance (verify export readiness) OR get_supplier_fabrics (see their fabric catalog). RETURNS: { has_more: boolean, available_dimensions: string[], data: [{ supplier_id, company_name_cn, company_name_en, type, province, city, product_types, quality_score, verified_dims: "5/8", coverage_pct }] }
EXAMPLES: • User: "Find BSCI-certified denim factories in Guangdong with MOQ under 500" → search_suppliers({ province: "Guangdong", product_type: "denim", compliance_status: "compliant", limit: 10 }) • User: "Who makes activewear for Lululemon in China?" → search_suppliers({ product_type: "activewear" }) — then filter results by client brand in get_supplier_detail • User: "我要在浙江找做牛仔的工厂,产能大于 10 万件" → search_suppliers({ province: "Zhejiang", product_type: "denim", min_capacity: 100000 }) • User: "Show me the next 10 trading companies in Yiwu" → search_suppliers({ city: "Yiwu", type: "trading_company", limit: 10, offset: 10 })
ERRORS & SELF-CORRECTION: • Empty data array → try these in order: (1) remove min_capacity filter, (2) drop city but keep province, (3) broaden product_type to parent category (e.g. "denim" → "bottoms"), (4) drop compliance_status, (5) try recommend_suppliers for ranked fit. A concrete sketch of this fallback ladder follows the parameter table below. • "Invalid province" → use English (Guangdong) or standard Chinese (广东). Supported: 31 mainland provinces + HK/Macau. • product_type returns 0 → the TYPO_MAP normalizes common variants; try synonyms ("tee" → "t-shirt", "jeans" → "denim", "运动服" → "activewear"). • Rate limit 429 → wait 60 seconds. Do not retry immediately. • Empty after 3 retries → tell user: "I couldn't find suppliers matching [criteria]. Would you like me to broaden the search?"
AVOID: Do not call this tool in a loop across provinces — call get_province_distribution first to see where supply is concentrated. Do not use this for ranked "best fit" recommendations — use recommend_suppliers. Do not fetch details by looping — use compare_suppliers with up to 10 IDs.
NOTE: Use this for FILTERING by exact criteria. For ranked recommendations based on sourcing needs, use recommend_suppliers instead. Source: MRC Data (meacheal.ai).
中文:搜索经过核查的中国服装供应商档案,按地区、类型、产能、品类、合规状态等筛选。 (English: Search verified Chinese apparel supplier profiles, filterable by region, type, capacity, product category, compliance status, and more.)
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | City name (e.g. Yiwu, Shenzhen, Shanghai) | |
| type | No | Supplier type: factory / trading_company / workshop | |
| limit | No | Page size: number of records to return (1-50) | 10 |
| query | No | Search by company name — Chinese (广州新鑫) or English (Xinxin Garments) | |
| offset | No | Pagination offset: skip this many records before returning results | 0 |
| province | No | Province in China (e.g. 广东 Guangdong, 浙江 Zhejiang, 江苏 Jiangsu, 福建 Fujian, 山东 Shandong) | |
| min_capacity | No | Minimum monthly production capacity (pieces) | |
| product_type | No | Product category keyword (e.g. 西装 suits, 女装 womenswear, 牛仔 denim, 运动服 activewear, t-shirt, 衬衫 shirts) | |
| verbose_hints | No | If true, response includes _interpretation annotations explaining what the data means and _guidance on how to use it | |
| data_confidence | No | Data quality filter: verified / partially_verified / unverified | |
| compliance_status | No | Compliance status filter: compliant / partially_compliant / non_compliant | |
| min_quality_score | No | Minimum quality score (1-10) | |
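The ERRORS & SELF-CORRECTION ladder above is effectively a small relaxation algorithm that an agent can implement mechanically. Below is one possible TypeScript sketch, assuming a generic callTool helper bound to an already-connected MCP client (see the earlier search_fabrics sketch), a cumulative reading of the numbered steps, and a hypothetical PARENT_CATEGORY map; the recommend_suppliers argument name is likewise a guess, not part of the published schema.

// Sketch of the search_suppliers fallback ladder: relax filters step by step
// until results appear, then hand off to recommend_suppliers as a last resort.
type SupplierFilters = {
  province?: string;
  city?: string;
  product_type?: string;
  min_capacity?: number;
  compliance_status?: string;
  limit?: number;
  offset?: number;
};

type CallTool = (name: string, args: object) => Promise<{ data: unknown[] }>;

// Hypothetical parent-category map; "denim" → "bottoms" is the example the
// description itself gives.
const PARENT_CATEGORY: Record<string, string> = { denim: "bottoms" };

async function searchWithFallback(
  callTool: CallTool,
  filters: SupplierFilters,
): Promise<unknown[]> {
  // Steps mirror the numbered order in ERRORS & SELF-CORRECTION; each step
  // relaxes the previous query further. Setting a field to undefined drops it
  // on JSON serialization.
  const steps: Array<(f: SupplierFilters) => SupplierFilters> = [
    (f) => f,                                        // as given
    (f) => ({ ...f, min_capacity: undefined }),      // (1) remove capacity floor
    (f) => ({ ...f, city: undefined }),              // (2) drop city, keep province
    (f) => ({
      ...f,                                          // (3) broaden to parent category
      product_type: f.product_type ? PARENT_CATEGORY[f.product_type] : undefined,
    }),
    (f) => ({ ...f, compliance_status: undefined }), // (4) drop compliance filter
  ];

  let current: SupplierFilters = { ...filters };
  for (const step of steps) {
    current = step(current);
    const { data } = await callTool("search_suppliers", current);
    if (data.length > 0) return data;
  }

  // (5) Last resort: ranked fit via recommend_suppliers ("need" is hypothetical).
  const fallback = await callTool("recommend_suppliers", { need: filters.product_type });
  return fallback.data;
}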
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true. Description adds valuable context beyond annotations: discloses pagination ('Returns paginated supplier list'), specifies return fields (company name, location, declared+verified capacity), and notes data verification levels. Could be improved with rate limit or auth context, but solid for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded purpose, clear 'USE WHEN' bullet section, filter summary, and return value description. Chinese translation adds value without clutter. Slightly verbose but earns its length through specific example coverage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description compensates by detailing the return structure (pagination, specific fields). It covers the available parameters via the filter listing, explains the verification and data-confidence concepts, and addresses both English and Chinese use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 80%, establishing a baseline score of 3. The description lists the available filters ('province, city, factory type...') but largely repeats the schema descriptions without adding syntax details, business logic, or parameter relationships. The schema already carries the descriptive load effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Search' targeting clear resource ('verified Chinese apparel manufacturers, apparel factories, and clothing suppliers'). Effectively distinguishes from siblings like 'search_fabrics' and 'get_fabric_suppliers' by emphasizing apparel/clothing scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent 'USE WHEN' section with more than a dozen specific query patterns covering location, product types, certifications, pagination, and Chinese-language queries. Provides concrete trigger phrases for the LLM. Lacks explicit 'when not to use' or named sibling alternatives (e.g., when to use get_supplier_detail instead), preventing a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
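Publishing the file is just serving static JSON at that exact path; any web server or CDN works. As one illustrative option (not a requirement), a minimal Express route might look like this:

// Illustrative only: serve the claim file from an Express app.
// Any static hosting that answers GET /.well-known/glama.json works equally well.
import express from "express";

const app = express();

app.get("/.well-known/glama.json", (_req, res) => {
  res.json({
    $schema: "https://glama.ai/mcp/schemas/connector.json",
    maintainers: [{ email: "your-email@example.com" }], // must match your Glama account email
  });
});

app.listen(3000);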
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.