Server Details

Search for and get fashion product recommendations across multiple e-commerce stores

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: vistoya/vistoya-mcp
GitHub Stars: 0
Server Listing: Vistoya

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 5 of 5 tools scored. Lowest: 3.4/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: discover_products for semantic search, find_similar for recommendations, get_filters for metadata, get_product for detailed info, and list_stores for store overview. The descriptions reinforce unique use cases, making tool selection unambiguous.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (e.g., discover_products, find_similar, get_filters, get_product, list_stores). The verbs are descriptive and aligned with the actions, and snake_case is used uniformly throughout, providing a predictable and readable naming scheme.

Tool Count: 5/5

With 5 tools, the server is well-scoped for a fashion product catalog domain. Each tool serves a specific function (search, recommendations, metadata, details, store listing), and there are no extraneous or missing tools, making the count appropriate for the intended purpose.

Completeness: 4/5

The tool set covers core operations for a fashion catalog: discovery, similarity, filtering, product details, and store listing. Minor gaps exist, such as no explicit update or delete tools for catalog management, but these are not essential for a query-focused server, and agents can work effectively with the provided tools.

Available Tools

5 tools
discover_products: A
Read-only

Find fashion products using natural language. Uses AI-powered semantic search with vector embeddings. Best for descriptive queries like "breathable linen dress for a beach wedding under $200", "minimalist gold jewelry", or "sustainable streetwear". Supports fashion-specific filters: color, material, gender, occasion, season, availability. Multi-currency: prices can be specified in any currency (e.g. "under 200 zł" or min_price=200 + currency="PLN") — they are automatically converted to USD for filtering. Results always show the original store currency.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| brand | No | Filter by brand name | |
| limit | No | Max results (1-30, default 10) | |
| query | Yes | Natural language search query — be descriptive for best results. Can include price with currency symbols (e.g. "white coat under 200 zł"), which will be parsed automatically. | |
| style | No | Style filter (e.g. minimalist, streetwear, elegant, y2k, techwear) | |
| colors | No | Filter by lowercase colors, e.g. ["black", "navy", "sage green"] | |
| gender | No | Gender filter | |
| season | No | Season filter | |
| pattern | No | Pattern filter (e.g. solid, stripe, checked, floral) | |
| category | No | Product category | |
| currency | No | ISO 4217 currency code for min_price/max_price (e.g. "PLN", "EUR", "GBP"). Prices are converted to USD for filtering. Omit for USD. | |
| occasion | No | Occasion filter | |
| materials | No | Filter by lowercase materials, e.g. ["cotton", "silk", "leather"] | |
| max_price | No | Maximum price (in the currency specified by "currency", or USD if omitted) | |
| min_price | No | Minimum price (in the currency specified by "currency", or USD if omitted) | |
| silhouette | No | Silhouette/fit filter (e.g. fitted, slim, regular, relaxed, oversized) | |
| color_match | No | Color matching mode. "any" (default): product has at least one of the queried colors. "exact": product has at least one image where the ONLY colors are the queried colors — use for mono-color searches like "all black". | any |
| subcategory | No | Lowercase subcategory, e.g. dresses, sneakers, crossbody bags, earrings | |
| store_domain | No | Filter by store domain (use list_stores to get domains) | |
| exclude_colors | No | Exclude products with these colors, e.g. ["white", "beige"] | |
| exclude_materials | No | Exclude products with these materials, e.g. ["polyester", "nylon"] | |
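The filters above compose into a single MCP tools/call request. A minimal sketch in Python, assuming the standard MCP JSON-RPC 2.0 payload shape; the argument values are illustrative, not taken from the catalog:

```python
import json

# Hypothetical "tools/call" request for discover_products.
# Per the description, the PLN price bound is converted to USD
# server-side for filtering; results keep the store's currency.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "discover_products",
        "arguments": {
            "query": "breathable linen dress for a beach wedding",
            "max_price": 200,
            "currency": "PLN",     # applies to max_price/min_price
            "colors": ["white", "beige"],
            "color_match": "any",  # default: any queried color may match
            "limit": 10,
        },
    },
}

print(json.dumps(request, indent=2))
```

How the request is transported (stdio, Streamable HTTP) depends on the client; the payload shape is the same either way.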
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and openWorldHint=false, indicating a safe read operation with limited scope. The description adds valuable context: it explains the AI-powered semantic search mechanism, multi-currency handling with automatic conversion to USD, and that results show original store currency. This goes beyond annotations, though it doesn't detail rate limits or auth needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by key features and examples. Every sentence adds value: it explains the search method, provides usage examples, lists supported filters, and details currency handling. No wasted words, and it's structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (20 parameters, no output schema), the description does well by covering the AI search approach, multi-currency logic, and result formatting. It compensates for the lack of an output schema by stating that results show the original store currency. However, it could mention pagination or result caps more explicitly, though the schema does document the limit parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 20 parameters. The description adds some context by mentioning fashion-specific filters and multi-currency support, which aligns with parameters like colors, materials, and currency. However, it doesn't provide significant additional semantics beyond what the schema already specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find fashion products using natural language' with AI-powered semantic search. It specifies the resource (fashion products) and verb (find), and distinguishes from siblings by emphasizing natural language queries versus more structured alternatives like find_similar or get_product.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: 'Best for descriptive queries' with examples, and mentions fashion-specific filters. It implicitly contrasts with siblings by highlighting natural language search, suggesting when to use this tool over more structured or specific sibling tools like get_filters or list_stores.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_similar: A
Read-only

Given a product ID, find similar products across the entire catalog. Useful for "more like this" recommendations or finding alternatives. Returns up to 10 results per page, paginated (max 3 pages).

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| page | No | Page number (1-3) | |
| limit | No | Max similar products per page (1-10) | |
| product_id | Yes | The product ID (from a previous search result) | |
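The pagination limits noted in the description (up to 10 results per page, at most 3 pages) suggest a simple collection loop. A hedged sketch, assuming a hypothetical `call_tool` helper that sends the MCP request and returns a list of products (the server publishes no output schema, so the return shape is an assumption):

```python
def collect_similar(call_tool, product_id, per_page=10, max_pages=3):
    """Gather similar products by walking the 1-3 page window.

    `call_tool` is a hypothetical transport function: it takes a tool
    name and an arguments dict and returns a list of products.
    """
    results = []
    for page in range(1, max_pages + 1):
        batch = call_tool("find_similar", {
            "product_id": product_id,
            "page": page,
            "limit": per_page,
        })
        results.extend(batch)
        if len(batch) < per_page:  # short page means no further results
            break
    return results
```

Stopping on a short page avoids a wasted request when fewer than 30 similar products exist.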
Behavior: 4/5

Annotations provide readOnlyHint=true and openWorldHint=false, but the description adds valuable behavioral context: 'Returns up to 10 results per page, paginated (max 3 pages)'. This discloses pagination behavior and limits that aren't covered by annotations, though it doesn't mention rate limits or authentication needs.

Conciseness: 5/5

Two concise sentences that are front-loaded with the core purpose followed by behavioral details. Every word earns its place, with no redundancy or unnecessary elaboration.

Completeness: 4/5

For a read-only tool with good annotations and full schema coverage, the description provides adequate context about purpose, usage, and pagination behavior. The main gap is the lack of an output schema, so the description doesn't explain the return format, but this is reasonable given the tool's complexity.

Parameters: 3/5

With 100% schema description coverage, the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific semantics beyond what's in the schema, so it meets the baseline of 3 for high schema coverage.

Purpose: 5/5

The description clearly states the verb 'find' and resource 'similar products' with the specific scope 'across the entire catalog'. It distinguishes from siblings like 'get_product' (single product) and 'discover_products' (general discovery) by focusing on similarity-based retrieval.

Usage Guidelines: 4/5

The description provides clear context: 'Useful for "more like this" recommendations or finding alternatives', which indicates when to use this tool. However, it doesn't explicitly state when NOT to use it or mention specific alternatives among the sibling tools.

get_filters: A
Read-only

Returns available filter values in the catalog. By default returns all dimensions (categories, subcategories, brands, colors, materials, genders, occasions, seasons, styles, silhouettes, currencies, price range). Use "fields" to request only specific dimensions — faster and less data. Use "category" to scope subcategories to a specific category (e.g. "footwear" returns only footwear subcategories). Use "brand_search" to search brands by prefix instead of listing all.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| fields | No | Which filter dimensions to return. Omit for all. Example: ["colors", "subcategories", "priceRange"] | |
| category | No | Scope subcategories to this category (e.g. "footwear", "apparel"). Only affects the subcategories field. | |
| brand_page | No | Page number for brands (50 per page). Use with or without brand_search. | |
| brand_search | No | Search brands by name (case-insensitive, prefix matches first). Only affects the brands field. | |
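Since requesting only the needed dimensions is "faster and less data", a client can validate the `fields` list locally before calling. A sketch under stated assumptions: the dimension names and the "priceRange" casing are taken from the description and schema example above; the helper itself is hypothetical:

```python
# Filter dimensions listed in the get_filters description.
VALID_FIELDS = {
    "categories", "subcategories", "brands", "colors", "materials",
    "genders", "occasions", "seasons", "styles", "silhouettes",
    "currencies", "priceRange",
}

def filters_arguments(fields=None, category=None,
                      brand_search=None, brand_page=None):
    """Build an arguments dict for get_filters, rejecting unknown fields.

    All keys are optional; omitting `fields` asks for every dimension.
    """
    args = {}
    if fields is not None:
        unknown = set(fields) - VALID_FIELDS
        if unknown:
            raise ValueError(f"unknown filter dimensions: {sorted(unknown)}")
        args["fields"] = list(fields)
    if category is not None:
        args["category"] = category      # scopes subcategories only
    if brand_search is not None:
        args["brand_search"] = brand_search  # prefix matches first
    if brand_page is not None:
        args["brand_page"] = brand_page  # 50 brands per page
    return args
```

Catching a typo like "colours" client-side saves a round trip whose error shape is otherwise undocumented.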
Behavior: 4/5

Annotations indicate readOnlyHint=true and openWorldHint=false, which the description doesn't contradict. The description adds valuable behavioral context beyond the annotations: it explains performance implications ('faster and less data'), scoping behavior for 'category' and 'brand_search', and clarifies that 'brand_search' uses prefix matching. However, it doesn't mention pagination details for 'brand_page' or potential rate limits.

Conciseness: 5/5

The description is efficiently structured: the first sentence states the core purpose, followed by three concise sentences explaining key parameter usage. Each sentence adds clear value without redundancy, and the most important details come first.

Completeness: 4/5

Given the tool's moderate complexity (4 parameters, no output schema), the description is mostly complete. It covers the tool's purpose, key usage scenarios, and behavioral nuances. However, it doesn't describe the return format or structure of the filter values, which would be helpful since there's no output schema. The annotations provide safety information but not output details.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some semantic context: it explains that 'fields' reduces data volume, 'category' scopes subcategories, and 'brand_search' uses prefix matching. This provides marginal value beyond the schema but doesn't significantly enhance parameter understanding.

Purpose: 5/5

The description clearly states the tool's purpose with a specific verb ('Returns') and resource ('available filter values in the catalog'), and distinguishes it from siblings by focusing on filter metadata rather than product data or store information. It explicitly lists the dimensions returned, making the scope unambiguous.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use the tool: 'By default returns all dimensions' and 'Use "fields" to request only specific dimensions — faster and less data.' It also explains when to use specific parameters like 'category' and 'brand_search' for scoping or searching, offering clear alternatives within the tool itself.

get_product: A
Read-only

Get full details of a specific product by ID, including all variants, images, AI classification, and the direct link to purchase.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| product_id | Yes | The product ID (from a previous search result) | |
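Because product_id comes from a previous search result, get_product is typically the second step of a two-call flow. A minimal sketch, assuming a hypothetical `call_tool` transport helper and an "id" key in search results (illustrative, since no output schema is published):

```python
def product_details(call_tool, query):
    """Hypothetical two-step flow: search, then fetch full details.

    Returns None when the search yields no hits; otherwise the full
    product record (variants, images, AI classification, purchase link,
    per the get_product description).
    """
    hits = call_tool("discover_products", {"query": query, "limit": 1})
    if not hits:
        return None
    return call_tool("get_product", {"product_id": hits[0]["id"]})
```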
Behavior: 3/5

Annotations already indicate read-only and closed-world behavior. The description adds value by specifying what details are included (variants, images, AI classification, purchase link), but does not disclose additional traits like rate limits, authentication needs, or error handling.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the core purpose and lists the included details without unnecessary words, making it easy to parse.

Completeness: 4/5

For a read-only tool with one well-documented parameter and no output schema, the description is mostly complete. It could improve by clarifying the return format or error cases, but it adequately covers the tool's purpose and scope given the annotations.

Parameters: 3/5

Schema description coverage is 100%, so the parameter 'product_id' is fully documented in the schema. The description adds no additional parameter details beyond implying that the ID comes from a previous search, which aligns with the schema's description.

Purpose: 5/5

The description clearly states the verb ('Get') and resource ('full details of a specific product by ID'), and distinguishes from siblings by specifying that it retrieves detailed information for a single product rather than searching or listing (e.g., 'discover_products' for broader searches).

Usage Guidelines: 4/5

The description implies usage when detailed information on a specific product is needed, and the parameter description suggests it follows a search result. However, it does not explicitly state when NOT to use it or name alternatives like 'discover_products' for initial searches.

list_stores: B
Read-only

List all connected ecommerce stores in the catalog with their product counts and providers.

Parameters (JSON Schema)

No parameters.
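The discover_products schema notes that store domains come from list_stores, so the two tools chain naturally. A hedged sketch, assuming a hypothetical `call_tool` helper and a "domain" key on each store record (an assumption, as no output schema is published):

```python
def search_in_store(call_tool, domain_substring, query):
    """List stores, pick the first whose domain matches, search within it.

    Returns an empty list when no store domain contains the substring,
    avoiding a discover_products call with an invalid store_domain.
    """
    stores = call_tool("list_stores", {})
    match = next(
        (s for s in stores if domain_substring in s["domain"]), None)
    if match is None:
        return []
    return call_tool("discover_products",
                     {"query": query, "store_domain": match["domain"]})
```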

Behavior: 3/5

Annotations indicate readOnlyHint=true and openWorldHint=false, covering safety and scope. The description adds value by specifying what data is returned ('product counts and providers'), which isn't covered by annotations. However, it doesn't disclose other behavioral traits like rate limits, authentication needs, or error handling, resulting in a moderate score.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It directly states what the tool does and includes key details like 'product counts and providers', making it highly concise and well-structured.

Completeness: 3/5

Given the tool's simplicity (0 parameters, read-only, no output schema), the description is adequate but has gaps. It explains the return data but doesn't cover usage context or behavioral nuances. With annotations providing safety and scope, the description meets minimum viability but lacks depth for optimal agent guidance.

Parameters: 4/5

The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description doesn't add parameter details, which is appropriate. A baseline of 4 is applied since the schema fully handles parameters and the description doesn't need to compensate.

Purpose: 4/5

The description clearly states the tool's purpose: 'List all connected ecommerce stores in the catalog with their product counts and providers.' It specifies the verb ('List'), resource ('connected ecommerce stores'), and additional details ('product counts and providers'). However, it doesn't explicitly differentiate from sibling tools like 'discover_products' or 'get_product', which prevents a perfect score.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention scenarios for usage, prerequisites, or comparisons to sibling tools such as 'discover_products' or 'get_product'. This lack of contextual direction leaves the agent without clear usage instructions.
