TrustRails
Server Details
Search and compare 26,000+ UK electronics products across multiple retailers, including AO. Get real-time prices, stock availability, and cross-retailer price comparison in a single search. Covers laptops, phones, tablets, headphones, TVs, monitors, cameras, gaming, and more.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.7/5 across 2 of 2 tools scored.
The two tools serve distinctly different purposes: search_products handles discovery and filtering across the catalog, while get_product retrieves detailed specifications for a specific item. No functional overlap exists.
Both tools follow an identical snake_case verb_noun pattern (search_products, get_product) with clear, action-oriented verbs that consistently describe their operations.
At 2 tools, the set is borderline thin for an electronics comparison domain. While it covers basic search and retrieval, it lacks supporting operations like spec-based filtering, comparison helpers, or category browsing that would typically accompany such a service.
Basic read operations are present, but notable gaps exist for the electronics domain: no server-side filtering by technical specifications (e.g., RAM, storage) forces inefficient client-side filtering, and there are no tools for price tracking, alerts, or direct comparison operations.
Available Tools
2 tools

get_product
Get full details for a single product by ID. Returns complete technical specifications including specs.description (full prose spec text with processor, RAM, storage, display, ports etc), pricing, stock level, delivery time, and all retailer offers with per-retailer pricing. Accepts both canonical product IDs and original retailer offer IDs. Use this after search_products to get detailed specs for comparison or recommendations. Always call this when a user needs precise product attributes, compatibility info, side-by-side comparisons, or price comparison across retailers.
| Name | Required | Description | Default |
|---|---|---|---|
| product_id | Yes | The unique product ID from search results | |
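The recommended workflow in the description (search first, then fetch full specs for the top few hits before recommending) can be sketched as a small client-side helper. Note this is an assumed shape: `call_tool` and the result structure are hypothetical stand-ins for whatever MCP client is in use, not part of the published tool definitions.

```python
def compare_top_offers(call_tool, search_args, top_n=3):
    """Search, then fetch full specs for the top results so that
    recommendations are validated against real spec data rather than
    inferred from titles. call_tool(name, args) is a hypothetical
    MCP client function returning parsed tool results."""
    results = call_tool("search_products", search_args)
    details = []
    for product in results[:top_n]:
        # get_product accepts the id returned by search_products
        details.append(call_tool("get_product", {"product_id": product["id"]}))
    return details
```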
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full disclosure burden. It details the return structure (technical specs, pricing, stock, delivery, offers) and input flexibility (accepts both canonical IDs and retailer offer IDs), but omits error handling and edge cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: purpose, return value details, input constraints, and usage guidelines. Every clause provides distinct information. Well front-loaded with core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a single-parameter read tool: explains purpose, return payload structure, input ID types, and workflow integration with search_products. Lacks only error condition disclosure to be complete given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema fully covers product_id ('unique product ID from search results'), the description adds the crucial semantic detail that the tool also accepts 'original retailer offer IDs', clarifying input flexibility beyond the schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Get full details), resource (product), and scope (by ID). Clearly distinguishes from sibling search_products by contrasting single-item retrieval vs search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'after search_products' and 'when a user needs precise product attributes, compatibility info, side-by-side comparisons, or price comparison.' Names sibling alternative and defines precise use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_products
Search 26,000+ deduplicated UK electronics products across multiple retailers with price comparison. Returns summary data: title, brand, price, availability, category, purchase link, and offer_count. When offer_count > 1, the product is available from multiple retailers — call get_product to see all offers. Specs are minimal — for full technical specifications, call get_product with the product ID. Covers: Laptops, Desktops, Phones, Tablets, Headphones, Monitors, TVs, Cameras, Keyboards, Mice, Speakers, Gaming, Wearables, Printers, Networking, Storage, Audio, Drones, Cables & Chargers. All prices in GBP. IMPORTANT RULES: 1) Decompose the user's request: extract brand → brand filter, category → category filter, price → price filters. What remains is the query. Example: 'Sony headphones under £200' → brand='Sony', category='Headphones', max_price=200, query omitted. Example: 'MacBook Neo' → brand='Apple', category='Laptops', query='neo'. Example: 'Samsung QLED TV' → brand='Samsung', category='TVs', query='qled'. Example: 'Sony WH-1000XM5' → brand='Sony', category='Headphones', query='WH-1000XM5'. 2) DO NOT put brand names or prices in the query — use filters. DO put model lines, series names, and variants in the query (e.g. 'neo', 'ultra', 'slim', 'oled', 'qled', model numbers). 3) If brand + category alone fully describe what the user wants, omit the query entirely — fewer query words gives cleaner results. Only add query words when they meaningfully narrow down within that brand/category. 4) Always set lite=true to reduce payload size. 5) If 0 results, try a shorter/broader query or drop filters. 6) Use get_product for full specs — do not rely on search results for detailed attributes. AI USAGE PROTOCOL: For simple browsing, search with lite=true is sufficient. For spec-based queries (wattage, ports, RAM, screen size, weight, etc.), ALWAYS search first, then call get_product on the top 3-5 results and validate constraints against the full specs before recommending. 
Do not assume technical specs from titles alone. If specs are missing, state that explicitly. STOCK AVAILABILITY: When a product is availability: out_of_stock, do not recommend it as a purchase. Instead mention it as a notable alternative — especially if it offers a meaningful price advantage — and suggest the user check back. Never silently omit out-of-stock results; surface them transparently.
| Name | Required | Description | Default |
|---|---|---|---|
| lite | No | Return trimmed product objects with only essential fields (id, title, brand, price, currency, availability, image_url, purchase_url, offer_count). Always set to true unless full product objects are needed. | |
| sort | No | Sort order: 'relevance' (default), 'price_asc' (cheapest first), 'price_desc' (most expensive first). | |
| brand | No | Filter by brand name (exact match, case-insensitive). Examples: Apple, Samsung, Sony, HP, Dell, Lenovo, Anker, Bose, LG | |
| limit | No | Maximum number of products to return (default 50, max 100) | |
| query | No | The refinement terms after brand and category are extracted. Use for model lines, series names, variants, or model numbers (e.g. 'neo', 'ultra', 'oled', 'WH-1000XM5'). DO NOT include brand names or prices — use filters. Omit entirely if brand + category fully describe what the user wants. | |
| category | No | Filter by product category. Use ONLY these exact values: Laptops, Desktops, Tablets, Phones, TVs, Monitors, Headphones, Speakers, Cameras, Keyboards, Mice, Printers, Networking, Storage, Gaming, Wearables, Drones, Audio, Cables & Chargers. NOTE: 'Smartphones' is not valid — use 'Phones'. 'Televisions' is not valid — use 'TVs'. For TVs, use query: 'smart TV' — it returns far more results than 'TV' alone. Avoid query: 'television'. | |
| max_price | No | Maximum price in GBP. | |
| min_price | No | Minimum price in GBP. | |
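The decomposition rules above (brand, category, and price go into filters; lite is always true; the query carries only model lines, series names, or variants, and is omitted when the filters already describe the request) can be sketched as a small argument builder. The helper name is illustrative, not part of the tool definition:

```python
def build_search_args(brand=None, category=None, max_price=None,
                      min_price=None, query=None):
    """Assemble search_products arguments per the decomposition rules:
    filters carry brand/category/price, lite is always true, and query
    is included only when it meaningfully narrows within brand/category."""
    args = {"lite": True}  # rule 4: always set lite=true
    if brand:
        args["brand"] = brand
    if category:
        args["category"] = category
    if max_price is not None:
        args["max_price"] = max_price
    if min_price is not None:
        args["min_price"] = min_price
    if query:  # rule 2: never brand names or prices here
        args["query"] = query
    return args

# 'Sony headphones under £200' → filters only, query omitted (rule 3)
sony = build_search_args(brand="Sony", category="Headphones", max_price=200)
```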
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It explains deduplication behavior, offer_count semantics (>1 indicates multiple retailers), currency (GBP), coverage scope, lite mode effects (payload size), and stock availability handling (transparency requirements for out_of_stock). Minor gap: no mention of rate limiting, caching, or real-time vs. batched data updates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Lengthy but well-structured with clear sections (summary, coverage list, rules, protocol, stock guidance). Front-loaded with purpose statement. Every section earns its place for an AI agent attempting complex query decomposition, though the category list slightly duplicates schema parameter descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Thoroughly complete for a search tool lacking output schema. Describes all return fields (title, brand, price, availability, etc.), explains partial data limitations vs. get_product, covers edge cases (0 results, out_of_stock), and provides GBP currency context. The parameter interaction logic is fully specified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds exceptional value through decomposition rules with 4 detailed examples mapping natural language ('Sony headphones under £200') to exact parameter values. It explicitly mandates lite=true, explains the query parameter's unique role (model lines/variants only, never brand/price), and clarifies category exact values and exceptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb and resource ('Search 26,000+ deduplicated UK electronics products'), clarifies scope (price comparison across retailers), and explicitly distinguishes from sibling tool get_product by stating when each should be used ('call get_product to see all offers', 'for full technical specifications, call get_product').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'IMPORTANT RULES' with 6 numbered instructions and 4 concrete examples showing exactly how to decompose user requests into parameters. Includes 'AI USAGE PROTOCOL' section mandating when to chain calls to get_product ('ALWAYS search first, then call get_product on the top 3-5 results'). Also covers error handling ('If 0 results, try a shorter/broader query').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
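Before waiting on Glama's automatic verification, a quick local sanity check of the claim file's shape can catch typos. This validator is a sketch based only on the example above, not an official schema check:

```python
import json

def validate_claim_file(doc):
    """Minimal structural check of a glama.json claim file: at least one
    maintainer with a plausible email. Based on the example structure
    only; the real schema may require more."""
    maintainers = doc.get("maintainers")
    assert maintainers, "at least one maintainer entry is required"
    for m in maintainers:
        assert "@" in m.get("email", ""), "each maintainer needs an email"
    return True

# Validate the file as it would be read from disk
raw = '{"$schema": "https://glama.ai/mcp/schemas/connector.json", ' \
      '"maintainers": [{"email": "your-email@example.com"}]}'
validate_claim_file(json.loads(raw))
```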
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!