Glama

Server Details

Search for fashion products and get recommendations across multiple e-commerce stores

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
vistoya/vistoya-mcp
GitHub Stars
0
Server Listing
Vistoya

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across all 7 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: brand discovery, product discovery, similarity, filters, detail retrieval, and rendering. No overlaps are evident from the descriptions.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case (e.g., discover_brands, get_product). The verb choices are logical and uniform.

Tool Count: 5/5

7 tools is well-scoped for a fashion product search and discovery server. It covers the essential operations without being excessive or insufficient.

Completeness: 5/5

The tool set provides complete coverage for the domain: brand search, product search, similarity recommendations, filter retrieval, detailed product info, and visual rendering. No obvious gaps exist for a discovery-oriented API.

Available Tools

6 tools
discover_brands: A
Read-only

Find fashion brands using natural language, structured filters, or both. Best for queries like "Italian streetwear brands", "minimalist Scandinavian brands", "Japanese technical outerwear", or "brands with avant-garde tailoring". query is optional — provide a query, structured filters, or both. Brand country/shipping signals are best-effort and separate from product availability.

Parameters (JSON Schema)

limit (optional): Maximum number of brands to return (1-20, default 8).
query (optional): Natural language brand search query, e.g. "Italian streetwear brands", "minimalist Scandinavian brands", or "brands like Rick Owens". When omitted, results are filtered by the structured fields below. Provide either a query, structured filters, or both.
style (optional): Style filter, e.g. streetwear, minimalist, elegant, avant-garde, techwear.
price_tier (optional): Brand price-tier focus filter.
gender_focus (optional): Brand audience focus filter.
category_focus (optional): Brand category focus filter, e.g. ["clothing", "shoes"].
ships_from_country (optional): ISO-3166 alpha-2 country filter for best-effort store shipping origin, e.g. "IT", "US", "GB". This is not the same as brand origin.
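To make the call shape concrete, here is a sketch of client-side argument payloads for discover_brands. The field names come from the schema above; the payload values and the validate_args helper are illustrative assumptions, not part of the server.

```python
# Hypothetical example payloads for discover_brands. Field names follow
# the documented schema; values are made up for illustration.
query_only = {"query": "minimalist Scandinavian brands"}

filters_only = {
    "style": "streetwear",
    "ships_from_country": "IT",  # ISO-3166 alpha-2, store shipping origin
    "limit": 5,
}

combined = {
    "query": "brands like Rick Owens",
    "category_focus": ["clothing"],
    "limit": 8,
}

def validate_args(args):
    """Hypothetical client-side check mirroring the documented constraints."""
    structured = ("style", "price_tier", "gender_focus",
                  "category_focus", "ships_from_country")
    if not args.get("query") and not any(k in args for k in structured):
        raise ValueError("provide a query, structured filters, or both")
    limit = args.get("limit", 8)  # documented default is 8
    if not 1 <= limit <= 20:     # documented range is 1-20
        raise ValueError("limit must be between 1 and 20")
    return True

for payload in (query_only, filters_only, combined):
    assert validate_args(payload)
```

The check encodes the description's rule that at least one of query or a structured filter must be present.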
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations include readOnlyHint=true and openWorldHint=false, indicating behavior. Description adds that country/shipping signals are best-effort and separate from product availability, which is valuable context for result interpretation. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences that front-load purpose and provide actionable examples. Every sentence adds value without repetition or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Description adequately covers purpose, usage, and behavioral caveats. Given schema covers parameters fully and no output schema exists, description could optionally elaborate on return structure, but current level is sufficient for competent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and descriptions in schema are detailed. Description does not add additional parameter details beyond the schema, but examples provided in description align with parameters. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool finds fashion brands using natural language, with specific examples that distinguish it from sibling tools like discover_products (which likely finds products, not brands). It specifies the input type (natural language queries) and typical use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description explicitly provides context for when to use (natural language queries for categories/style/country) and gives example queries that work well. It also warns that brand country/shipping signals are best-effort and separate from product availability, helping set expectations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_products: A
Read-only

Find fashion products using natural language and/or structured filters. Provide a query for semantic ranking via multimodal text+image embeddings ("breathable linen dress for a beach wedding under $200", "minimalist gold jewelry", "sustainable streetwear") — best for open-ended discovery. Provide only structured filters (category, brand, colors, gender, price, etc.) for pure browse — results are recency-ranked and paginate cleanly. Combine both for filtered semantic search. At least one of query or a filter must be provided. Returns compact product cards: AI-generated summary, price, images, tags, and compact availability by color/size; variant price differences are nested under the availability dimension that determines price. For merchant description, store info, SKU-level variants, exact variant prices, and all product images, call get_product with a product ID from these results. Multi-currency prices supported (e.g. "under 200 zł" or min_price=200 + currency="PLN"); returned prices render in the requested currency when provided.

Parameters (JSON Schema)

page (optional): Page number.
brand (optional): Filter by brand name.
limit (optional): Page size (1-10, default 10).
query (optional): Natural language search query; be descriptive for best results. Can include price with currency symbols (e.g. "white coat under 200 zł"), which will be parsed automatically. When omitted, results are filtered by the structured fields below and ranked by recency. Provide either a query, structured filters, or both.
style (optional): Style filter (e.g. minimalist, streetwear, elegant, y2k, techwear).
colors (optional): Filter by lowercase colors, e.g. ["black", "navy", "sage green"].
gender (optional): Gender filter.
season (optional): Season filter.
styles (optional): Multi-style filter (OR). A product matches if any of its `styles` values is in this list. Use instead of `style` to span a related set, e.g. ["basics", "minimalist", "preppy", "sportswear"] for a "Casual" bucket. Combined with `style` if both are provided.
pattern (optional): Pattern filter (e.g. solid, stripe, checked, floral).
sleeves (optional): Sleeve style filter.
category (optional): Category slug, e.g. "clothing", "clothing/jackets", "clothing/jackets/bomber-jackets". Accepts last-segment shortcuts when unambiguous; e.g. "loafers-and-slip-ons" resolves the same as "shoes/loafers-and-slip-ons", and "bomber-jackets" resolves the same as "clothing/jackets/bomber-jackets".
currency (optional): ISO 4217 currency code for min_price/max_price (e.g. "PLN", "EUR", "GBP"). Prices are converted to USD for filtering. Omit for USD.
neckline (optional): Neckline filter.
occasion (optional): Occasion filter.
materials (optional): Filter by lowercase materials, e.g. ["cotton", "silk", "leather"].
max_price (optional): Maximum price (in the currency specified by "currency", or USD if omitted).
min_price (optional): Minimum price (in the currency specified by "currency", or USD if omitted).
silhouette (optional): Silhouette/fit filter (e.g. fitted, slim, regular, relaxed, oversized).
color_match (optional, default "any"): Color matching mode. "any": product has at least one of the queried colors. "exact": product has at least one image where the ONLY colors are the queried colors; use for mono-color searches like "all black".
store_domain (optional): Filter by store domain (e.g. "thereformation.com").
exclude_colors (optional): Exclude products with these colors, e.g. ["white", "beige"].
is_sustainable (optional): True when the user explicitly wants products with sustainability claims.
available_sizes (optional): Filter by available size labels, e.g. ["s", "m", "38"].
exclude_materials (optional): Exclude products with these materials, e.g. ["polyester", "nylon"].
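As an illustration of the multi-currency behavior described above, the two payloads below express the same intent ("white coat under 200 zł") either as a query the server parses or as explicit structured filters. Field names follow the schema; the values are hypothetical.

```python
# Two ways to express the same price-capped search, per the docs above.
# The "under 200 zł" phrasing is parsed server-side; the structured form
# makes the cap and currency explicit. Payload values are illustrative.
via_query = {"query": "white coat under 200 zł"}

via_filters = {
    "query": "white coat",
    "max_price": 200,
    "currency": "PLN",        # ISO 4217; converted to USD for filtering
    "colors": ["white"],
    "color_match": "exact",   # only images whose ONLY colors match; good for mono-color searches
}

# The documented page size is 1-10 with a default of 10.
assert 1 <= via_filters.get("limit", 10) <= 10
```

Either form should yield comparable results; the structured form is the safer choice when the agent already knows the user's constraints.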
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and openWorldHint=false, indicating a safe read operation with limited scope. The description adds valuable context: it explains the AI-powered semantic search mechanism, multi-currency handling with automatic conversion to USD for filtering, and that returned prices render in the requested currency when one is provided. This goes beyond annotations, though it doesn't detail rate limits or auth needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by key features and examples. Every sentence adds value: it explains the search method, provides usage examples, lists supported filters, and details currency handling. No wasted words, and it's structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (20+ parameters, no output schema), the description does well by covering the AI search approach, multi-currency logic, and result formatting. It compensates for the lack of output schema by describing the compact product-card contents. However, it could mention pagination or result limits more explicitly, though the schema covers the limit parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 20 parameters. The description adds some context by mentioning fashion-specific filters and multi-currency support, which aligns with parameters like colors, materials, and currency. However, it doesn't provide significant additional semantics beyond what the schema already specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find fashion products using natural language' with AI-powered semantic search. It specifies the resource (fashion products) and verb (find), and distinguishes from siblings by emphasizing natural language queries versus more structured alternatives like find_similar or get_product.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: it marks semantic queries as "best for open-ended discovery", with examples, and mentions fashion-specific filters. It contrasts with siblings by highlighting natural language search, suggesting when to use this tool over more structured or specific sibling tools like get_filters or get_product.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_similar_brands: A
Read-only

Given a brand name, find similar brands using brand-profile vectors generated during product indexing. Returns up to 10 brands.

Parameters (JSON Schema)

brand (required): Brand name (case-insensitive), e.g. "Rick Owens".
limit (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint: true and openWorldHint: false, indicating safe read-only, deterministic results. The description adds value by explaining the underlying mechanism (brand-profile vectors from product indexing) and the fixed return limit of 10, which is not in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with the core purpose, no wasted words. Every sentence adds essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (2 parameters, no output schema, simple return constraint), the description covers the key aspects: input type, similarity method, and result count. No additional return value explanation needed since there is no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%, with the 'brand' parameter described well ('Brand name or brandKey, e.g. "Rick Owens" or "rick owens".') and 'limit' having clear range via schema. The description reinforces that the tool returns up to 10 brands, aligning with the limit default and max. However, the description does not explain the limit parameter's role.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'find similar brands' using 'brand-profile vectors', specifying the input (brand name or brandKey) and a concrete output (up to 10 brands). It distinguishes from siblings like 'find_similar' and 'discover_brands' by mentioning the specific vector-based similarity mechanism.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for finding similar brands given a brand reference, but does not explicitly state when to use this tool versus alternatives like 'find_similar' or 'discover_brands'. No when-not-to-use or exclusion criteria are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_similar_products: A
Read-only

Given a product ID, find similar products across the entire catalog. Useful for "more like this" recommendations or finding alternatives. Returns compact product cards, not full variant detail; call get_product for SKU-level variants, exact variant prices, merchant description, store info, and all images. Returns page and hasNextPage. Returns up to 10 results per page, paginated (max 3 pages).

Parameters (JSON Schema)

page (optional): Page number (1-3).
limit (optional): Page size (1-10).
currency (optional): ISO 4217 currency code to render prices in (e.g. "GBP", "EUR", "USD"). Defaults to USD. Stored native prices are preferred; falls back to FX conversion when a merchant-set price isn't available.
product_id (required): The product ID (from a previous search result).
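The pagination contract above (page/hasNextPage, up to 10 results per page, max 3 pages) can be sketched as a client loop. fetch_page is a hypothetical stand-in for the actual MCP tool call; it returns fake data in the documented response shape.

```python
# Sketch of paginating find_similar_products. fetch_page stands in for a
# real MCP tool call and fabricates two pages of results for illustration.
def fetch_page(product_id, page, limit=10, currency="USD"):
    # Hypothetical response shape: items plus page/hasNextPage markers.
    items = [f"prod-{page}-{i}" for i in range(limit)]
    return {"items": items, "page": page, "hasNextPage": page < 2}

def collect_similar(product_id, currency="USD"):
    """Walk pages until hasNextPage is false or the documented cap of 3."""
    results, page = [], 1
    while page <= 3:  # the docs state a hard maximum of 3 pages
        resp = fetch_page(product_id, page, currency=currency)
        results.extend(resp["items"])
        if not resp["hasNextPage"]:
            break
        page += 1
    return results

assert len(collect_similar("abc123")) == 20  # two fake pages of 10
```

The loop honors both stop conditions, so it terminates correctly whether the server reports two pages (as here) or the full three.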
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, confirming non-destructive read. The description adds behavioral details: returns compact product cards, includes page/hasNextPage, limited to 10 results per page (max 3 pages), and explains currency fallback logic. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (3 sentences) and front-loads the purpose. However, it packs multiple pieces of information into a single paragraph without clear separation. It is still efficient and readable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters, no output schema, but clear annotations, the description provides sufficient context: return format (compact cards), pagination details (page/hasNextPage, max 3 pages, up to 10 per page), and a pointer to get_product for full details. It covers all essential behavioral aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters, so baseline is 3. The description adds value by explaining the currency parameter's behavior (ISO code, default USD, fallback to FX conversion), which goes beyond the schema description. Other parameters are adequately described in schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action: 'Given a product ID, find similar products across the entire catalog.' It specifies the verb 'find' and the resource 'similar products', and distinguishes from sibling tool 'get_product' by noting it returns compact cards versus full variant detail.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('for 'more like this' recommendations or finding alternatives') and when not to ('call get_product for SKU-level variants...'), providing clear guidance on alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_filters: A
Read-only

Returns available filter values in the catalog. By default returns categoryTree plus brands, colors, materials, genders, occasions, seasons, styles, silhouettes, currencies, and price range. Use "fields" to request only specific dimensions — faster and less data. "categoryTree" is a flat DFS-ordered list of { value, label } entries; hierarchy is encoded in the value slug (e.g. "clothing/jackets/bomber-jackets"), parents appear before descendants, and every value can be passed directly to discover_products.category. Use "brand_search" to search brands by prefix instead of listing all. Pass "gender" to scope categoryTree to that gender (women/men/girls/boys); omit to see the merged union.

Parameters (JSON Schema)

fields (optional): Which filter dimensions to return. Omit for all. Example: ["categoryTree", "colors", "priceRange"].
gender (optional): Scope categoryTree to this gender. Omit to return the merged union across women/men/girls/boys.
brand_page (optional): Page number for brands (12 per page). Use with or without brand_search.
brand_search (optional): Search brands by name (case-insensitive, prefix matches first). Only affects the brands field.
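The slug-encoded hierarchy and last-segment shortcut behavior described above can be illustrated client-side. The categoryTree fragment below is invented sample data in the documented {value, label} shape; the resolve_shortcut helper is hypothetical (the real resolution happens server-side inside discover_products).

```python
# Illustrative categoryTree fragment: a flat DFS-ordered list where the
# hierarchy is encoded in the slug and parents precede descendants.
category_tree = [
    {"value": "clothing", "label": "Clothing"},
    {"value": "clothing/jackets", "label": "Jackets"},
    {"value": "clothing/jackets/bomber-jackets", "label": "Bomber Jackets"},
    {"value": "shoes", "label": "Shoes"},
    {"value": "shoes/loafers-and-slip-ons", "label": "Loafers & Slip-Ons"},
]

def resolve_shortcut(segment, tree):
    """Resolve a last-segment shortcut to its full slug, if unambiguous.

    Hypothetical helper mimicking the documented behavior: "bomber-jackets"
    resolves to "clothing/jackets/bomber-jackets" when only one slug ends
    with that segment; ambiguous or unknown segments return None.
    """
    matches = [e["value"] for e in tree
               if e["value"].rsplit("/", 1)[-1] == segment]
    return matches[0] if len(matches) == 1 else None

assert resolve_shortcut("bomber-jackets", category_tree) == "clothing/jackets/bomber-jackets"
assert resolve_shortcut("loafers-and-slip-ons", category_tree) == "shoes/loafers-and-slip-ons"
```

Every value in the tree can be passed directly to discover_products.category, so a client never needs to reconstruct the hierarchy itself.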
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true and openWorldHint=false, which the description doesn't contradict. The description adds valuable behavioral context beyond annotations: it explains performance implications ('faster and less data'), scoping behavior for 'gender' and 'brand_search', and clarifies that 'brand_search' uses prefix matching. However, it doesn't mention pagination details for 'brand_page' or potential rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: the first sentence states the core purpose, followed by three concise sentences explaining key parameter usage. Each sentence adds clear value without redundancy, and the information is front-loaded with the most important details first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema), the description is mostly complete. It covers the tool's purpose, key usage scenarios, and behavioral nuances. However, it doesn't describe the return format or structure of filter values, which would be helpful since there's no output schema. The annotations provide safety information but not output details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some semantic context: it explains that 'fields' reduces data volume, 'gender' scopes categoryTree, and 'brand_search' uses prefix matching. This provides marginal value beyond the schema but doesn't significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Returns') and resource ('available filter values in the catalog'), and distinguishes it from siblings by focusing on filter metadata rather than product data or store information. It explicitly lists the dimensions returned, making the scope unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the tool: all dimensions are returned by default, and 'Use "fields" to request only specific dimensions — faster and less data.' It also explains when to use specific parameters like 'gender' and 'brand_search' for scoping or searching, offering clear alternatives within the tool itself.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_product: A
Read-only

Get the detailed response for a specific product ID. Use this after discover_products or find_similar_products when you need merchant description, store info, all images, SKU-level availability variants, SKU, colorKey/size matrix, exact variant prices/compareAtPrice in the requested currency, and the direct link to purchase.

Parameters (JSON Schema)

currency (optional): ISO 4217 currency code to render prices in (e.g. "GBP", "EUR", "USD"). Defaults to USD.
product_id (required): The product ID (from a previous search result).
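The intended two-step flow (discover first, then fetch detail) can be sketched as follows. call_tool is a hypothetical stand-in for an MCP client invocation, and the stubbed response shapes are illustrative only, not the server's actual output schema.

```python
# Typical flow per the docs: discover_products returns compact cards with
# IDs; get_product fetches full detail for one ID. call_tool is a fake
# dispatcher returning hypothetical, hard-coded response shapes.
def call_tool(name, arguments):
    if name == "discover_products":
        return {"products": [{"id": "prod-001", "summary": "Linen dress"}]}
    if name == "get_product":
        return {"id": arguments["product_id"],
                "currency": arguments.get("currency", "USD")}
    raise KeyError(f"unknown tool: {name}")

# Step 1: discover, taking the first compact card's ID.
search = call_tool("discover_products", {"query": "linen dress", "limit": 1})
pid = search["products"][0]["id"]

# Step 2: fetch full detail, rendering prices in EUR.
detail = call_tool("get_product", {"product_id": pid, "currency": "EUR"})

assert detail["id"] == "prod-001"
assert detail["currency"] == "EUR"
```

Keeping the discovery and detail calls separate matches the description's guidance: compact cards for browsing, get_product only when SKU-level data or the purchase link is needed.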
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and closed-world behavior. The description adds value by specifying what details are included (merchant description, store info, variants, images, purchase link), but does not disclose additional traits like rate limits, authentication needs, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and lists included details without unnecessary words, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with one well-documented parameter and no output schema, the description is mostly complete. It could improve by clarifying the return format or error cases, but it adequately covers the tool's purpose and scope given the annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the parameter 'product_id' is fully documented in the schema. The description adds no additional parameter details beyond implying it comes from a previous search, which aligns with the schema's description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('full details of a specific product by ID'), and distinguishes from siblings by specifying it retrieves detailed information for a single product rather than searching or listing (e.g., 'discover_products' for broader searches).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when detailed information on a specific product is needed, and the parameter description suggests it follows a search result. However, it does not explicitly state when NOT to use it or name alternatives like 'discover_products' for initial searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
