Server Details

AI sales manager for composite swimming pools — recommendations, pricing, BIM/CAD, dealers

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
lagunapools/laguna-pools-mcp
GitHub Stars
0
Server Listing
laguna-pools-mcp

Tool Descriptions (Grade: C)

Average 2.7/5 across 15 of 15 tools scored. Lowest: 1.9/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes targeting specific resources like pricing, accessories, or dealer info, but some overlap exists between 'search_pools' and 'recommend_pool' as both help with pool selection, potentially causing confusion. Overall, descriptions are clear enough to differentiate tools in most cases.

Naming Consistency: 5/5

Tool names follow a consistent verb_noun pattern throughout, such as 'get_pool_price', 'find_dealer', and 'submit_lead'. All tools use snake_case with clear action-object pairs, making the set predictable and easy to navigate.

Tool Count: 5/5

With 15 tools, the server is well-scoped for a pool company domain, covering sales, support, and product info without being overwhelming. Each tool serves a clear purpose, such as pricing, specifications, or dealer interactions, justifying its inclusion.

Completeness: 5/5

The tool set provides comprehensive coverage for the pool sales and support domain, including CRUD-like operations like search, recommend, calculate, and submit, along with detailed product info, accessories, and services. No obvious gaps are present; agents can handle full workflows from selection to purchase.

Available Tools

15 tools
calculate_total (Grade: C)

Calculates the cost of a turnkey package: pool + filtration + cover + border stone + delivery. Package configuration calculator.

Parameters (JSON Schema)
  city (optional): City, used for the delivery cost calculation
  model (required): Model: laguna2..laguna9
  series (optional): Series: premium (standard) or nord (insulated)
  include_cover (optional): Add a pool cover
  include_border (optional): Add border stone
  include_filter (optional): Add filtration (recommended)
  include_furniture (optional): Add furniture (lounger, bench)
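The parameter set above maps directly onto an MCP tools/call request. A minimal sketch, assuming a generic JSON-RPC client; the helper name and the request id are illustrative, and only the documented tool name and parameters come from this listing:

```python
import json

def build_calculate_total_call(model, series=None, city=None, **options):
    """Assemble a JSON-RPC payload for the calculate_total tool."""
    arguments = {"model": model}  # 'model' is the only required parameter
    if series is not None:
        arguments["series"] = series  # "premium" or "nord"
    if city is not None:
        arguments["city"] = city      # used for the delivery cost
    # Optional boolean add-ons from the table: include_cover,
    # include_border, include_filter, include_furniture.
    arguments.update({k: v for k, v in options.items() if v is not None})
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "calculate_total", "arguments": arguments},
    }

payload = build_calculate_total_call(
    "laguna5", series="nord", city="Kazan", include_filter=True
)
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Omitted optional parameters are simply left out of the arguments object, matching the schema's "No" required flags.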
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions calculating a 'turnkey' package cost but doesn't disclose behavioral traits such as whether this is a read-only operation, if it requires authentication, rate limits, or what the output format looks like. The description is minimal and lacks critical operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
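One way to close the disclosure gap the review describes is to ship structured tool annotations alongside the description. A sketch of what that could look like for this tool; the hint field names follow the MCP tool-annotations convention, while the values are assumptions about how a read-only pricing calculator would most plausibly behave:

```python
# Hypothetical annotations for calculate_total. The values are assumptions,
# not taken from the server: a quote calculator presumably reads catalog
# data and changes nothing.
calculate_total_annotations = {
    "title": "Turnkey package cost calculator",
    "readOnlyHint": True,      # computes a quote, mutates no state
    "destructiveHint": False,  # nothing is deleted or overwritten
    "idempotentHint": True,    # same inputs -> same total
    "openWorldHint": False,    # operates only on the server's own catalog
}
print(calculate_total_annotations)
```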

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, stating the core purpose in a single sentence. However, it could be more structured by explicitly separating purpose from usage context, and the second sentence 'Calculator for package configuration' is somewhat redundant with the first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no output schema, no annotations), the description is incomplete. It lacks information on output format, error handling, dependencies, or how results are presented. For a calculation tool with multiple inputs, more context is needed to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds no additional parameter semantics beyond implying the tool calculates based on these inputs. Baseline 3 is appropriate when the schema does the heavy lifting, though the description doesn't compensate with extra insights.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: calculating the total cost for a 'turnkey' pool package including pool, filtration, cover, border stone, and delivery. It specifies the verb 'calculate' and resource 'total cost,' but doesn't explicitly differentiate from siblings like 'get_pool_price' which might provide individual pricing rather than package calculations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description mentions it's a 'calculator for package configuration,' but doesn't specify prerequisites, exclusions, or compare it to sibling tools like 'get_pool_price' or 'recommend_pool' that might serve related purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_dealer (Grade: C)

Find a Laguna Pools dealer in a given city. Covers 175 cities across Russia.

Parameters (JSON Schema)
  city (optional): City (in Russian)
  region (optional): Region/oblast (in Russian)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states what the tool does but lacks behavioral details: it doesn't disclose whether this is a read-only operation, what data is returned (e.g., dealer contact info, availability), potential rate limits, or authentication needs. For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded in a single sentence, efficiently stating the purpose and scope. There's no wasted text, though it could be slightly more structured (e.g., separating scope from action). It earns its place by clearly communicating the tool's intent without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It lacks details on return values (e.g., what dealer information is provided), error handling, or behavioral traits. For a tool with 2 parameters and moderate complexity (dealer lookup across many cities), more context is needed to guide an AI agent effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters ('city' and 'region') documented in Russian. The description adds no additional parameter semantics beyond the schema, such as format examples or constraints like city-region relationships. Baseline 3 is appropriate since the schema does the heavy lifting, but the description doesn't compensate with extra context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('find') and resource ('dealer Laguna Pools in the city'), specifying it covers 175 cities in Russia. It distinguishes from siblings like 'search_pools' or 'recommend_pool' by focusing on dealer locations rather than pool products or services. However, it doesn't explicitly differentiate from all siblings (e.g., 'get_repair_services' might also involve dealers).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives is provided. The description mentions '175 cities in Russia' which implies geographical scope, but doesn't specify prerequisites, when-not-to-use scenarios, or direct alternatives among siblings like 'search_pools' or 'submit_lead' that might relate to dealer interactions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_accessories (Grade: D)

Accessories: technical pits, stands, covers, LagunaBox.

Parameters (JSON Schema)

No parameters

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but fails completely. It doesn't indicate whether this is a read operation, what format the data returns, whether authentication is required, or any side effects. The description provides only a list of accessory types without explaining the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief (one phrase) but inefficiently structured. It's a simple list rather than a functional description. While concise, it fails to convey purpose or usage, so the brevity doesn't serve clarity. The Russian language doesn't inherently affect scoring, but the content is under-specified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations, no output schema, and zero parameters, the description is completely inadequate. It doesn't explain what the tool returns, how to interpret the accessory list, or what 'get' means operationally. Given the sibling tools suggest this is part of a pool/spa system, the description fails to provide necessary context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters with 100% schema description coverage, so the baseline is 4. The description doesn't need to explain parameters since none exist, and it appropriately doesn't attempt to describe non-existent parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description lists examples of accessories (technical pits, stands, covers, LagunaBox) but doesn't state what the tool actually does. It's a noun phrase rather than a verb phrase describing an action. The tool name 'get_accessories' suggests retrieval, but the description doesn't confirm this or explain what 'get' means in this context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided about when to use this tool versus alternatives like 'get_furniture', 'get_filters', or 'get_slides'. The description offers no context about appropriate use cases, prerequisites, or relationships to sibling tools in the pool-related server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_bim_files (Grade: C)

BIM/CAD files for architects and designers: DWG, OBJ, RFA, 3DS. For integrating a pool into a project.

Parameters (JSON Schema)
  model (optional): Model: laguna3..laguna9, furniture, slide. Empty = all
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions file types and purpose but doesn't disclose behavioral traits such as whether this is a read-only operation, if it requires authentication, rate limits, or what the output format looks like (e.g., list of files, download links). For a tool with no annotations, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and to the point, consisting of two sentences that state the tool's purpose and file formats. It's front-loaded with key information, though the second sentence could be more precise. There's no wasted text, making it efficient, but it could benefit from slightly more structure to enhance clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete for a tool that likely returns files or data. It doesn't explain what is returned (e.g., file list, metadata), error conditions, or dependencies. For a tool with one parameter and no structured output info, the description should provide more context to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'model' documented as accepting values like 'laguna3..laguna9, furniture, slide' and defaulting to all if empty. The description doesn't add any meaning beyond this, as it doesn't explain parameter usage or constraints. With high schema coverage, the baseline score of 3 is appropriate, as the schema handles the parameter details adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool provides BIM/CAD files for architects/designers and lists file formats (DWG, OBJ, RFA, 3DS), which gives a general purpose. However, it doesn't specify the exact action (e.g., 'retrieve', 'list', or 'download') and the phrase 'Для интеграции бассейна в проект' ('For integrating a pool into a project') is somewhat vague about scope. It doesn't clearly differentiate from sibling tools like get_furniture or get_slides, which might also provide project files.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. The description mentions integration into a project, but it doesn't specify prerequisites, exclusions, or compare to siblings like get_pool_specs or get_pool_passport that might offer related data. Usage is implied through context but lacks clear directives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_filters (Grade: C)

LPF filtration units for pools (400-900 mm).

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description only states what the tool provides (LPF filtration units) without any information on how it behaves—e.g., whether it returns a list, details, or requires authentication. It lacks details on rate limits, error handling, or response format, leaving significant gaps in understanding the tool's operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single phrase, which is concise but under-specified. It lacks structure such as a clear verb or usage context, making it feel incomplete rather than efficiently brief. While it doesn't waste words, it fails to provide essential information that would help an AI agent understand the tool's purpose and behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and no output schema, the description should fully explain what the tool does and returns. However, it only states the subject matter (LPF filtration units) without specifying the action (e.g., list or retrieve) or the return format. This leaves the agent unclear on how to use the tool or interpret its results, making it incomplete for effective invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, so schema description coverage is vacuously 100%. With no parameters, the baseline score is 4, as there is nothing for the description to compensate for. The description does not need to add parameter semantics, and it does not introduce any confusion regarding inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Фильтрационные установки LPF для бассейнов (400-900 мм)' is in Russian and translates to 'LPF filtration units for pools (400-900 mm)'. This states what the tool provides (filtration units) but is vague about the action—it doesn't specify if it lists, retrieves, or details these units. It distinguishes from siblings by focusing on filtration units, but lacks a clear verb like 'get' or 'list', making the purpose somewhat ambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention any context, prerequisites, or exclusions, and with siblings like 'get_pool_specs' or 'search_pools', there is no indication of how this tool fits into the workflow or when it should be chosen over others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_furniture (Grade: B)

Catalog of composite pool furniture: loungers, benches, countertops, stools, planters.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It implies a read-only operation by listing items, but doesn't specify if it requires authentication, has rate limits, returns structured data, or handles errors. For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that lists the resource and key examples without unnecessary words. It's front-loaded with the main purpose and uses a colon to introduce specific items, making it easy to parse. Every part of the sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple retrieval with no parameters) and lack of annotations or output schema, the description is minimally complete. It states what the tool does but lacks details on behavior, output format, or usage context. For a no-param tool, this is adequate but leaves clear gaps in guidance and transparency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so there are no parameters to document. The description doesn't need to add parameter semantics, and it appropriately doesn't mention any. Baseline 4 is applied as per the rules for tools with no parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to retrieve a catalog of composite furniture for pools, listing specific items like loungers, benches, countertops, stools, and planters. It uses a specific verb ('Каталог' implies listing/retrieving) and resource (composite furniture for pools), though it doesn't explicitly distinguish itself from sibling tools like 'get_accessories' or 'get_slides'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or differentiate from similar tools like 'get_accessories' or 'get_slides', which might also return pool-related items. This leaves the agent with no explicit usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pool_passport (Grade: C)

Laguna Pools pool passport: installation, operation, winterization, care, water treatment, optional equipment. The official manual.

Parameters (JSON Schema)
  section (optional): Passport section: all (entire passport), about (about the pools), delivery (delivery/unloading), installation (installation/pit/bedding/banding), drainage (drainage), equipment (equipment hookup), water (water treatment/chemistry), care (care/maintenance), winter (winterizing), spring (de-winterizing in spring), tolerances (tolerances), accessories (optional equipment)
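A client can pre-validate the section value against the enum listed for this parameter before calling the tool. A minimal sketch; the helper name is hypothetical, and since the listing does not state a server-side default, an omitted section is simply left out of the arguments:

```python
# Enum values taken from the get_pool_passport parameter schema above.
VALID_SECTIONS = {
    "all", "about", "delivery", "installation", "drainage", "equipment",
    "water", "care", "winter", "spring", "tolerances", "accessories",
}

def passport_arguments(section=None):
    """Build the arguments dict for get_pool_passport."""
    if section is None:
        return {}  # parameter is optional; the server picks its default
    if section not in VALID_SECTIONS:
        raise ValueError(f"unknown passport section: {section!r}")
    return {"section": section}
```

For example, passport_arguments("winter") requests only the winterization section, while passport_arguments() leaves the choice to the server.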
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions the document covers specific topics but doesn't disclose behavioral traits like whether this is a read-only operation, if it returns structured data or documents, potential rate limits, or authentication requirements. The description is functional but lacks operational context needed for an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that lists key topics, which is reasonably concise. However, it could be more front-loaded by explicitly stating it retrieves documentation. The list of topics is somewhat dense but relevant. It avoids unnecessary repetition but isn't optimally structured for quick scanning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 1 parameter with full schema coverage and no output schema, the description adequately covers the tool's purpose and scope. However, without annotations or output details, it lacks information on return format (e.g., text, PDF, structured data) and behavioral constraints. For a simple retrieval tool, this is minimally viable but leaves gaps in operational understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with detailed enum explanations, so the baseline is 3. The description doesn't add parameter-specific information beyond what's in the schema (e.g., it doesn't clarify default behavior if no section is specified). However, it implicitly supports the schema by listing similar topics (installation, water treatment, etc.).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a pool passport/manual covering specific topics (installation, operation, maintenance, water treatment, accessories). It specifies this is for 'Laguna Pools' and is an 'official guide,' which distinguishes it from other pool-related tools like get_pool_specs or get_pool_price. However, it doesn't explicitly mention the verb 'retrieve' or 'get,' though this is implied by the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like get_pool_specs or get_accessories. It lists content areas but doesn't indicate scenarios where this passport information is preferred over other data sources. There's no mention of prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pool_price (Grade: C)

Pool prices, with PREMIUM and PREMIUM NORD series options.

Parameters (JSON Schema)
  model (required): Model: laguna2..laguna9
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions price retrieval but does not cover critical aspects like data freshness, error handling, authentication needs, or rate limits. This leaves significant gaps in understanding the tool's operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, straightforward sentence that efficiently conveys the core purpose without unnecessary details. It is appropriately sized and front-loaded, though it could be slightly more structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It does not explain return values, error conditions, or behavioral traits, which are essential for a tool with price data. This inadequacy impacts usability despite the simple parameter schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting the 'model' parameter as 'Модель: laguna2..laguna9'. The description adds no additional parameter details beyond this, so it meets the baseline of 3 by not detracting from the schema's information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool provides 'prices for pools with PREMIUM and PREMIUM NORD series options', which indicates a retrieval function but is vague about the exact action (e.g., list, fetch, calculate). It does not clearly distinguish from siblings like 'get_pool_specs' or 'search_pools', leaving ambiguity in scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'get_pool_specs' for specifications or 'search_pools' for broader pool data. The description implies usage for price queries but lacks explicit context or exclusions, offering minimal direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pool_specs (grade C)

Detailed specifications for a pool model: dimensions, volume, weight, depth, series, warranty

Parameters (JSON Schema)
  model (required): Model: laguna2..laguna9
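The single-parameter schema above can be reconstructed as a JSON Schema sketch. The expansion of "laguna2..laguna9" into an explicit enum is an assumption; the server may validate the range differently:

```python
# Reconstructed input schema for get_pool_specs, inferred from the
# parameter table. The enum expansion is an assumption.
MODELS = [f"laguna{i}" for i in range(2, 10)]  # laguna2 .. laguna9

input_schema = {
    "type": "object",
    "properties": {
        "model": {
            "type": "string",
            "enum": MODELS,
            "description": "Model: laguna2..laguna9",
        }
    },
    "required": ["model"],
}

def validate(args: dict) -> bool:
    """Minimal client-side check an agent could apply before calling."""
    return args.get("model") in MODELS

print(validate({"model": "laguna5"}))   # True
print(validate({"model": "laguna12"}))  # False
```

An explicit enum in the published schema would let clients reject bad model codes before the call, which partly compensates for the missing error-condition documentation.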
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It implies a read-only operation by describing retrieval of specifications, but does not state whether this requires authentication, has rate limits, returns structured data, or handles errors. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that lists key attributes without unnecessary words. It is front-loaded with the core purpose, though it could be slightly more structured by explicitly stating the action (e.g., 'Retrieve detailed specifications'). Overall, it is concise and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no output schema, no annotations), the description is minimally adequate. It covers what the tool does but lacks details on behavioral traits, usage context, and output format. Without annotations or an output schema, the description should do more to compensate, but it meets a basic threshold for a straightforward lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'model' documented as accepting values like 'laguna2..laguna9'. The description does not add any meaning beyond this, such as explaining the model format or providing examples. With high schema coverage, the baseline score of 3 is appropriate, as the schema adequately handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool retrieves detailed specifications for a pool model, listing specific attributes like dimensions, volume, weight, depth, series, and warranty. 'Характеристики' (specifications) names the resource and implies retrieval, and 'модели бассейна' (pool model) identifies the target, but the description does not explicitly differentiate the tool from siblings like 'get_pool_passport' or 'get_pool_price', which might provide overlapping or related information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, exclusions, or comparisons to sibling tools such as 'search_pools' for broader queries or 'get_pool_price' for cost information, leaving the agent to infer usage based on the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_repair_services (grade D)

Pool repair and reconstruction. ecoFINISH technology: aquaBRIGHT and polyFIBRO coatings

Parameters (JSON Schema)
  service_type (optional): Service type
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. However, it only mentions repair services and technologies without explaining what the tool does (e.g., returns a list, provides details, or requires authentication). It lacks critical behavioral traits such as whether it's read-only, has rate limits, or what the output format might be, making it inadequate for agent understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences, but it's not front-loaded with clear purpose—it starts with vague statements about repair services and technologies. While there's no wasted text, the structure fails to immediately convey the tool's function, reducing its effectiveness. It could be more efficient by leading with the tool's action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (1 parameter with full schema coverage but no annotations or output schema), the description is incomplete. It doesn't explain what the tool returns or how to interpret results, leaving gaps in understanding. For a tool that likely retrieves service information, more context on output or usage is needed to be fully helpful to an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with a well-documented 'service_type' parameter including an enum and description. The description adds no additional meaning about parameters beyond what the schema provides, such as explaining the enum values or usage context. Since schema coverage is high, the baseline score of 3 is appropriate, as the description doesn't need to compensate but also doesn't enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Ремонт и реконструкция бассейнов. Технология ecoFINISH: покрытия aquaBRIGHT, polyFIBRO' (Pool repair and reconstruction. ecoFINISH technology: aquaBRIGHT, polyFIBRO coatings) is vague and tautological: it essentially restates the tool name 'get_repair_services' by mentioning repair and reconstruction of pools, without specifying what the tool actually does (e.g., list services, provide details, or schedule repairs). It fails to distinguish this tool from siblings like 'get_filters' or 'get_pool_specs', leaving the purpose unclear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention any context, prerequisites, or exclusions, nor does it refer to sibling tools. For example, it doesn't clarify if this is for retrieving service information versus other tools like 'get_pool_price' or 'submit_lead', leading to potential misuse.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
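The Purpose and Usage Guidelines critiques above suggest a concrete fix: lead with a verb and an output, not a topic restatement. A hypothetical before/after for this tool (the rewritten wording is illustrative, not the server's actual text):

```python
# Before: a topic restatement that mirrors the tool name.
before = ("Ремонт и реконструкция бассейнов. "
          "Технология ecoFINISH: покрытия aquaBRIGHT, polyFIBRO")

# After (hypothetical): verb-first, names the output, points to a sibling.
after = (
    "List available pool repair and renovation services, optionally "
    "filtered by service_type. Returns service names, ecoFINISH coating "
    "options (aquaBRIGHT, polyFIBRO), and typical scope of work. "
    "Read-only; use submit_lead to actually request a repair."
)

# Quick heuristic check: the rewrite opens with an action verb.
print(after.split()[0])  # List
```

The rewrite addresses three dimensions at once: Purpose (verb plus resource), Behavior (read-only), and Usage Guidelines (defer mutations to submit_lead).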

get_slides (grade C)

Catalog of slides and waterfalls for pools

Parameters (JSON Schema)

No parameters

Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description only states it's a catalog without explaining what that entails—whether it's a read-only list, requires authentication, has rate limits, returns structured data, or has side effects. This is inadequate for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single phrase that is concise but under-specified—it's too brief to be helpful, lacking action verbs or context. While not verbose, it fails to provide essential information, making it inefficient rather than appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations, no output schema, and a vague description, the description is incomplete. It doesn't explain what the tool returns (e.g., a list, details, or images of slides/waterfalls) or how it behaves, leaving significant gaps for an agent to understand its functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the lack of inputs. The description doesn't need to add parameter details, and it doesn't contradict the schema. A baseline of 4 is appropriate as no parameter information is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Каталог горок и водопадов для бассейнов' (Catalog of slides and waterfalls for pools) restates the tool name 'get_slides' in different words without specifying what action it performs. It doesn't clearly state whether this retrieves, lists, searches, or displays slides/waterfalls, and doesn't distinguish from sibling tools like 'get_accessories' or 'get_furniture' that might overlap in domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention context, prerequisites, or exclusions, and doesn't reference sibling tools like 'get_accessories' or 'search_pools' that might serve similar purposes. This leaves the agent with no usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tools (grade B)

List of all available MCP server tools

Parameters (JSON Schema)

No parameters

Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'список' (list) implies a read-only operation, the description doesn't specify whether this requires authentication, what format the output takes, or any rate limits. For a tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any fluff. It's appropriately sized for a simple tool and front-loads the essential information. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters, no output schema), the description is adequate but minimal. It states what the tool does but lacks details about output format or behavioral context. For a discovery tool in a server with many siblings, more guidance on usage and output expectations would be helpful, but the description meets the minimum viable threshold.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, and schema description coverage is 100% (since there are no parameters to describe). With no parameters, the description doesn't need to add parameter semantics beyond what the schema provides. The baseline for zero parameters is 4, as the description appropriately doesn't discuss nonexistent parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Список всех доступных инструментов MCP сервера' (List all available MCP server tools). It uses a specific verb ('список' - list) and resource ('инструментов MCP сервера' - MCP server tools), making the function unambiguous. However, it doesn't explicitly distinguish itself from sibling tools, which are all unrelated pool-related operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, context for usage, or relationship to the sibling tools. The agent must infer that this is a meta-tool for discovering other tools, but no explicit usage instructions are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
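Worth noting when documenting this tool: the MCP protocol itself already exposes tool discovery through the JSON-RPC "tools/list" method, so a custom list_tools tool duplicates it. A sketch of the shape that exchange takes (payload values here are illustrative):

```python
import json

# JSON-RPC 2.0 request an MCP client sends for built-in tool discovery.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Illustrative response: a result object carrying a "tools" array.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "get_pool_price", "description": "..."},
            {"name": "submit_lead", "description": "..."},
        ]
    },
}

names = [t["name"] for t in response["result"]["tools"]]
print(json.dumps(names))  # ["get_pool_price", "submit_lead"]
```

A good description for list_tools would say how its output differs from (or merely mirrors) the protocol-level listing, which is exactly the usage guidance the review finds missing.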

recommend_pool (grade C)

AI manager: pool selection based on site description, needs, and budget. Professional consultation

Parameters (JSON Schema)
  city (optional): City (used to match a dealer and climate)
  budget (optional): Budget (₽). Leave unset if unknown
  people (optional): Number of people swimming at the same time
  purpose (optional): Purpose: family (family recreation), sport (swimming), relax (plunge pool), bath (sauna), spa (spa zone), commercial (commercial use)
  location (optional): Placement: outdoor_ground (outdoor, ground level), outdoor_above (outdoor, above ground), indoor (indoors)
  area_width (optional): Site/room width (m)
  area_length (optional): Site/room length (m)
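The parameter table above maps to a JSON Schema along these lines. This is a hedged reconstruction: the enum spellings come from the parameter descriptions, everything else is inferred:

```python
# Reconstructed input schema for recommend_pool. All seven fields are
# optional per the table; types are assumed from the descriptions.
recommend_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "budget": {"type": "number"},
        "people": {"type": "integer"},
        "purpose": {
            "type": "string",
            "enum": ["family", "sport", "relax", "bath", "spa", "commercial"],
        },
        "location": {
            "type": "string",
            "enum": ["outdoor_ground", "outdoor_above", "indoor"],
        },
        "area_width": {"type": "number"},
        "area_length": {"type": "number"},
    },
    "required": [],
}

print(len(recommend_schema["properties"]))  # 7
```

With every field optional, the description should spell out what happens when few or no inputs are given, which is part of the completeness gap the review identifies.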
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states this is for 'подбор' (selection/recommendation) and 'консультация' (consultation), implying a read-only advisory tool. However, it doesn't disclose behavioral traits like whether it requires authentication, has rate limits, returns structured recommendations, or whether it is a deterministic calculation versus AI-generated advice. The description is too vague about actual behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two brief phrases. The first phrase states the core purpose, and the second adds context about being a professional consultation. There's no wasted text, though it could be more structured (e.g., separating purpose from context).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., pool recommendations, dealer contacts, specifications), how results are formatted, or any limitations. For a complex recommendation tool with many inputs, this leaves significant gaps for an AI agent to understand proper usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description mentions 'описание участка, потребностей, бюджета' (site description, needs, budget) which loosely maps to parameters like area dimensions, purpose, and budget, but adds no meaningful semantics beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'подбор бассейна по описанию участка, потребностей, бюджета' (selecting a pool based on site description, needs, budget). It specifies the verb 'подбор' (selection/recommendation) and resource 'бассейна' (pool), though it doesn't explicitly differentiate from sibling tools like 'search_pools' or 'get_pool_specs'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search_pools' or 'get_pool_specs'. It mentions 'Профессиональная консультация' (professional consultation) which implies a consultative use case, but doesn't specify prerequisites, exclusions, or comparative contexts with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_pools (grade C)

Search for composite pools by parameters: dimensions, budget, type (pool/plunge), climate

Parameters (JSON Schema)
  type (optional): Type: pool (pool, 4-9 m) / plunge (plunge pool, 2-3 m)
  budget (optional): Budget (₽)
  climate (optional): Climate: warm (standard) / cold (PREMIUM NORD with insulation)
  max_length (optional): Max. length (m)
  min_length (optional): Min. length (m)
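The documented parameter semantics (pool = 4-9 m, plunge = 2-3 m) imply a filtering behavior an agent could mirror client-side. A sketch under stated assumptions, with an invented mini-catalog for illustration:

```python
# Client-side pre-filter mirroring the documented search_pools semantics.
# CATALOG contents are invented for illustration.
CATALOG = [
    {"model": "laguna2", "length_m": 2.5},
    {"model": "laguna5", "length_m": 5.0},
    {"model": "laguna9", "length_m": 9.0},
]

def search(pool_type=None, min_length=None, max_length=None):
    # Type implies a length band: pool 4-9 m, plunge 2-3 m.
    lo, hi = {"pool": (4, 9), "plunge": (2, 3)}.get(pool_type, (0, float("inf")))
    results = []
    for p in CATALOG:
        if (lo <= p["length_m"] <= hi
                and (min_length is None or p["length_m"] >= min_length)
                and (max_length is None or p["length_m"] <= max_length)):
            results.append(p["model"])
    return results

print(search(pool_type="pool"))    # ['laguna5', 'laguna9']
print(search(pool_type="plunge"))  # ['laguna2']
```

Documenting this type/length interaction in the tool description would close part of the Parameters gap, since the schema alone cannot express that `type` constrains the length range.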
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the search action but doesn't describe what the search returns (e.g., list of pools, detailed specs, or just IDs), whether it's paginated, rate-limited, or requires authentication. For a search tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and lists key parameters. It avoids unnecessary words and gets straight to the point, though it could be slightly more structured (e.g., separating purpose from parameter list).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a search tool with 5 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., pool objects, IDs, or just counts), how results are formatted, or any behavioral aspects like error handling. This leaves the agent guessing about the output and usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description lists the search parameters (sizes, budget, type, climate), which aligns with the 5 parameters in the schema. Since schema description coverage is 100%, the schema already documents each parameter thoroughly (e.g., enum values for type and climate with explanations). The description adds no additional meaning beyond what the schema provides, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: searching for composite pools using specific parameters (sizes, budget, type, climate). It uses the verb 'search' with the resource 'composite pools' and lists the search criteria. However, it doesn't explicitly differentiate from sibling tools like 'recommend_pool' or 'get_pool_specs', which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'recommend_pool' and 'get_pool_specs' available, there's no indication of when this search tool is preferred, what prerequisites exist, or any exclusions. Usage is implied through the parameter list but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

submit_lead (grade B)

Submit a consultation/order request to a Laguna Pools dealer. The client will receive a callback

Parameters (JSON Schema)
  city (optional): City
  name (required): Client name
  phone (required): Client phone
  message (optional): Comment (model, configuration, questions)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that submitting a lead results in a callback to the client, which is useful context. However, it doesn't disclose critical behavioral traits such as whether this is a read-only or mutation operation (implied mutation from 'submit'), authentication requirements, rate limits, error handling, or what happens after submission (e.g., confirmation, status tracking). For a mutation tool with zero annotation coverage, this is a significant gap.
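The zero-annotation gap flagged above has a standard fix: MCP tool annotations carry exactly these behavioral hints declaratively. A sketch of what submit_lead could declare (hint field names come from the MCP tool annotations spec; the values reflect an assumed lead-submission mutation):

```python
# MCP tool annotations disclosing submit_lead's behavior. The hint
# keys are from the MCP spec; the values are assumptions about this
# server's semantics.
submit_lead_annotations = {
    "title": "Submit lead to dealer",
    "readOnlyHint": False,      # creates a lead record (mutation)
    "destructiveHint": False,   # additive; nothing is deleted
    "idempotentHint": False,    # re-calling files a duplicate lead
    "openWorldHint": True,      # reaches an external dealer/CRM system
}

print(submit_lead_annotations["readOnlyHint"])  # False
```

With these hints published, the description could focus on outcome and usage guidance instead of carrying the whole behavioral burden.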

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded in a single sentence that states the core purpose, followed by a brief outcome statement. There's no wasted text, and it efficiently communicates the essential information. However, it could be slightly more structured by separating purpose from outcome more clearly, but it's still highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a lead submission with 4 parameters, no output schema, and no annotations), the description is minimally complete. It covers the purpose and outcome but lacks details on behavioral aspects, error handling, and integration with sibling tools. Without annotations or output schema, the description should do more to compensate, but it provides enough context for basic usage, making it adequate but with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with clear descriptions for all 4 parameters (city, name, phone, message) in the input schema. The description doesn't add any parameter-specific information beyond what's already documented in the schema (e.g., it doesn't clarify format for phone numbers or length limits for message). With high schema coverage, the baseline is 3, as the description doesn't compensate with extra semantic value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Отправить заявку на консультацию/заказ дилеру Laguna Pools' (Submit a consultation/order request to Laguna Pools dealer). It specifies the action (submit), resource (lead/request), and outcome (client will receive a callback). However, it doesn't explicitly differentiate from sibling tools like 'find_dealer' or 'recommend_pool', which reduces it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context: when a client wants to request consultation or order from Laguna Pools. It mentions the outcome ('Клиент получит обратный звонок' - client will receive a callback), which suggests when this tool is appropriate. However, it doesn't provide explicit guidance on when to use alternatives like 'find_dealer' for locating dealers or 'recommend_pool' for recommendations, nor does it specify exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
