MCP Market Russia
Server Details
MCP server for Russian construction market data. It provides access to 3,395 construction companies and 13,436 projects across 18 regions of Russia. Search contractors, compare prices, analyze ratings, and get market reports, all via the MCP protocol. Its 21 tools cover search, analytics, cost estimation, and contractor recommendations.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.4/5 across 21 of 21 tools scored. Lowest: 2.6/5.
Most tools have distinct purposes, such as calculate_cost for cost estimation, company_deep_profile for detailed company data, and search_companies for finding companies. However, there is some overlap between company_deep_profile and company_portfolio, which both provide comprehensive company information, potentially causing confusion. Tools like market_analytics and market_report also have similar scopes, but their descriptions help differentiate them slightly.
The naming follows a consistent snake_case pattern throughout, with clear verb_noun structures like get_categories, search_projects, and request_quote. There are minor deviations, such as company_deep_profile and contractor_recommendation, which use adjectives or compound terms but still maintain readability and a similar style.
With 21 tools, the count is slightly high but reasonable for a comprehensive construction market server covering cost estimation, company profiles, market analysis, and project searches. It supports various workflows like research, comparison, and lead generation without feeling overly bloated, though it borders on the heavy side.
The tool set provides complete coverage for the Russian construction market domain, including CRUD-like operations such as get_company, search_companies, and request_quote. It supports cost estimation, market analytics, company comparisons, project searches, and lead generation, with no obvious gaps that would hinder agent workflows for market research or contractor selection.
Available Tools
21 tools

calculate_cost (Grade: A)
Calculate estimated construction cost based on real market data from the catalog. Uses average price per m² by material and region from actual company prices and projects.
Args:
- area: House area in square meters (required, e.g. 120)
- material: Building material (каркас/frame, брус/timber, газобетон/aerated_concrete, кирпич/brick, СИП/SIP). Empty = average across all.
- region: Region or city name for regional pricing. Empty = nationwide average.
- floors: Number of floors (1 or 2). 0 = no adjustment.
| Name | Required | Description | Default |
|---|---|---|---|
| area | Yes | ||
| floors | No | ||
| region | No | ||
| material | No |
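The parameter semantics above (required area; empty strings and zero meaning "use the average") can be sketched as a request builder. This is a minimal illustration assuming a standard MCP JSON-RPC `tools/call` envelope; the helper name is hypothetical and not part of the server.

```python
def build_calculate_cost_call(area: float, material: str = "",
                              region: str = "", floors: int = 0) -> dict:
    """Assemble a hypothetical MCP tools/call request for calculate_cost.

    Defaults mirror the documented semantics: empty material/region fall
    back to averages, and floors=0 applies no floor adjustment.
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "calculate_cost",
            "arguments": {
                "area": area,          # required, square meters
                "material": material,  # e.g. "брус" (timber); "" = all materials
                "region": region,      # "" = nationwide average
                "floors": floors,      # 1 or 2; 0 = no adjustment
            },
        },
    }

# Estimate a 120 m² timber house in Moscow
request = build_calculate_cost_call(120, material="брус", region="Москва")
```

Because the defaults carry meaning here ("empty = average"), passing them through explicitly, as above, keeps the call unambiguous even when the agent omits filters.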
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by explaining the tool's data sources ('real market data from the catalog', 'actual company prices and projects'), calculation methodology ('average price per m²'), and default behaviors for empty parameters. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured, with a clear purpose statement followed by well-organized parameter documentation. Every sentence earns its place: the first paragraph establishes context, and the Args section efficiently documents each parameter without redundancy. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, no annotations, but with output schema present, the description is complete enough. It explains what the tool does, how it works, and all parameter meanings. The output schema will handle return value documentation, so the description appropriately focuses on inputs and behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed semantics for all 4 parameters: explains required status, units, enumerated values with translations, regional scope implications, and special meanings for default values (0 floors = no adjustment, empty = averages). This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('calculate estimated construction cost') and resource ('based on real market data from the catalog'). It distinguishes itself from siblings by focusing on cost calculation rather than company analysis, project retrieval, or market reporting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through parameter explanations (e.g., 'Empty = average across all'), but lacks explicit guidance on when to use this tool versus alternatives like 'project_estimator' or 'price_comparison'. No when-not-to-use scenarios or clear sibling differentiation is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_deep_profile (Grade: C)
Get comprehensive company profile with all available data - contacts, projects, pricing, reviews analysis, market position, and comparison with competitors in same region.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It implies a read-only operation ('Get'), but doesn't disclose behavioral traits such as authentication needs, rate limits, data freshness, or whether it's a heavy/compute-intensive query. The mention of 'all available data' suggests comprehensiveness but lacks specifics on limitations or performance.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in the first phrase. It uses a single sentence with a dash to list data types efficiently, though the list could be slightly trimmed for brevity without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (comprehensive profile with multiple data types) and no annotations, the description is incomplete: it lacks behavioral details and parameter guidance. However, an output schema exists, so return values needn't be explained. This partially mitigates the gaps, but overall completeness is only minimally viable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description adds no parameter semantics beyond implying the 'slug' identifies a company. It doesn't explain what a slug is, its format, or how to obtain it, leaving the single required parameter undocumented in both schema and description. Baseline 3 applies as schema coverage is low, but the description fails to compensate adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get comprehensive company profile') and lists the types of data included (contacts, projects, pricing, etc.). However, it doesn't explicitly differentiate from sibling tools like 'get_company' or 'company_portfolio', which might offer similar or overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_company', 'company_portfolio', 'compare_companies', and 'market_report', there's no indication of how this tool differs in scope or when it's preferred over others, leaving usage decisions ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_portfolio (Grade: B)
Get FULL company portfolio: details, all projects, prices, reviews, contacts. Comprehensive dossier for due diligence or hiring decisions.
Args:
- company_slug: Company slug identifier (from search results).
| Name | Required | Description | Default |
|---|---|---|---|
| company_slug | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the tool retrieves a 'FULL company portfolio' and is for 'due diligence or hiring decisions', but it doesn't disclose behavioral traits such as whether this is a read-only operation, potential rate limits, authentication needs, or what the output looks like. The description is vague about the tool's behavior beyond its basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded. The first sentence clearly states the purpose, the second adds usage context, and the third explains the parameter. There's no wasted text, and the structure is logical, though it could be slightly more polished (e.g., formatting the 'Args' section better).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 1 parameter, no annotations, and an output schema exists, the description is moderately complete. It covers the purpose, usage context, and parameter semantics, but it lacks details on behavioral aspects like safety, performance, or output format. The output schema mitigates some gaps, but for a tool with no annotations, more behavioral transparency would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaning beyond the input schema. The schema has 1 parameter with 0% description coverage, and the description explains 'company_slug: Company slug identifier (from search results).' This clarifies the parameter's purpose and source, compensating for the low schema coverage. Since there's only 1 parameter, the baseline is high, and the description provides useful context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get FULL company portfolio: details, all projects, prices, reviews, contacts.' It specifies the verb ('Get') and resource ('company portfolio') with scope ('FULL'). However, it doesn't explicitly differentiate from sibling tools like 'get_company' or 'company_deep_profile', which likely serve similar purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context: 'Comprehensive dossier for due diligence or hiring decisions.' This suggests when to use it, but it doesn't provide explicit guidance on when to choose this tool over alternatives like 'get_company' or 'company_deep_profile'. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_companies (Grade: B)
Compare 2-3 construction companies side by side on prices, ratings, number of projects, and specialization.
Args: company_ids: Comma-separated company UUIDs to compare (2-3 IDs). Example: 'uuid1,uuid2,uuid3'
| Name | Required | Description | Default |
|---|---|---|---|
| company_ids | Yes |
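Since the schema only declares `company_ids` as a string, the "2-3 comma-separated UUIDs" constraint lives entirely in the description. A client-side guard like the following can catch malformed input before the call; the helper and its error behavior are assumptions of this sketch, not documented server behavior.

```python
def parse_company_ids(company_ids: str) -> list[str]:
    """Validate the comma-separated ID string for compare_companies.

    The tool description says 2-3 IDs, e.g. 'uuid1,uuid2,uuid3'; this
    check rejects anything outside that range before the request is sent.
    """
    ids = [part.strip() for part in company_ids.split(",") if part.strip()]
    if not 2 <= len(ids) <= 3:
        raise ValueError(f"compare_companies expects 2-3 IDs, got {len(ids)}")
    return ids
```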
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions what attributes are compared but doesn't disclose behavioral traits such as whether this is a read-only operation, if it requires authentication, rate limits, or how the comparison is performed (e.g., algorithmic details). The description is minimal and lacks context beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by a structured 'Args' section with details. There's minimal waste, though the example could be integrated more seamlessly. It efficiently conveys key information in two sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 1 parameter with 0% schema coverage and an output schema exists, the description is moderately complete. It covers the parameter semantics adequately but lacks behavioral context (no annotations) and doesn't explain the comparison output format, though the output schema may handle that. For a tool with no annotations and simple input, it's adequate but has gaps in usage and transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining 'company_ids' as comma-separated UUIDs for 2-3 companies, including an example. This clarifies the parameter's format and constraints beyond the schema's basic string type. However, it doesn't detail where to obtain these UUIDs or if they must be valid/active companies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: comparing construction companies on specific attributes (prices, ratings, number of projects, specialization). It specifies the verb 'compare' and resource 'construction companies' with concrete comparison dimensions. However, it doesn't explicitly differentiate from sibling tools like 'company_deep_profile' or 'price_comparison', which might offer overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying '2-3 construction companies' and listing comparison attributes, suggesting it's for side-by-side analysis. However, it lacks explicit guidance on when to use this tool versus alternatives like 'company_deep_profile' (for in-depth single company info) or 'price_comparison' (which might focus solely on pricing). No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contractor_recommendation (Grade: C)
AI-powered contractor recommendation. Finds the best matching companies based on budget, region, quality requirements. Returns ranked list with match scores.
| Name | Required | Description | Default |
|---|---|---|---|
| region | No | ||
| category | No | ||
| budget_max | No | ||
| budget_min | No | ||
| min_rating | No | ||
| need_contacts | No | ||
| need_portfolio | No |
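All seven parameters are optional, so an agent only needs to send the filters it actually cares about. A sketch of an arguments builder that drops unset filters follows; the assumption that empty strings, zeros, and False mean "no filter" (and can therefore be omitted) is inferred from sibling tools, not stated by this one.

```python
def build_recommendation_args(region: str = "", category: str = "",
                              budget_min: float = 0, budget_max: float = 0,
                              min_rating: float = 0,
                              need_contacts: bool = False,
                              need_portfolio: bool = False) -> dict:
    """Build an arguments dict for contractor_recommendation.

    Falsy values ("" / 0 / False) are treated as 'no filter' and omitted,
    which is an assumption of this sketch.
    """
    raw = {
        "region": region,
        "category": category,
        "budget_min": budget_min,
        "budget_max": budget_max,
        "min_rating": min_rating,
        "need_contacts": need_contacts,
        "need_portfolio": need_portfolio,
    }
    # Keep only the filters the caller actually set
    return {key: value for key, value in raw.items() if value}
```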
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'AI-powered' and returns 'ranked list with match scores', which adds some behavioral context. However, it lacks critical details: whether this is a read-only operation, if it requires authentication, rate limits, or how the ranking algorithm works. For a tool with no annotations, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded: two sentences that efficiently convey the core functionality. Every sentence earns its place by stating the purpose and output. No unnecessary details or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, no annotations, but with output schema), the description is moderately complete. It explains the purpose and output format ('ranked list with match scores'), which aligns with the output schema. However, it lacks usage guidelines and detailed parameter explanations, leaving gaps for an AI agent to understand when and how to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'budget, region, quality requirements' which loosely maps to parameters like 'budget_min/max', 'region', and 'min_rating'. However, it doesn't explain the 7 parameters fully (e.g., 'category', 'need_contacts', 'need_portfolio' are unmentioned). The description adds minimal value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'AI-powered contractor recommendation. Finds the best matching companies based on budget, region, quality requirements.' It specifies the verb ('finds'), resource ('companies'), and key criteria. However, it doesn't explicitly differentiate from sibling tools like 'find_best_companies' or 'search_companies', which appear similar.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools like 'find_best_companies', 'search_companies', and 'compare_companies', there's no indication of how this tool differs or when it's preferred. The description only states what it does, not when to choose it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_best_companies (Grade: A)
Smart lead generation: find the best construction companies matching your criteria. Perfect for finding contractors, generating leads, or market research.
Args:
- region: Filter by region name (e.g. 'Москва'). Empty = all.
- category: Filter by category/subcategory. Empty = all.
- min_rating: Minimum rating (0-5). 0 = no filter.
- max_price: Maximum price per m² in thousands RUB. 0 = no filter.
- min_price: Minimum price per m² in thousands RUB. 0 = no filter.
- has_phone: Only companies with a phone number.
- has_projects: Only companies with a project portfolio.
- sort_by: Sort by 'rating', 'price_asc', 'price_desc', 'reviews', 'projects'.
- limit: Max results (1-50).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| region | No | ||
| sort_by | No | rating | |
| category | No | ||
| has_phone | No | ||
| max_price | No | ||
| min_price | No | ||
| min_rating | No | ||
| has_projects | No |
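The description pins down an enumerated sort order and a 1-50 result limit, both of which can be enforced client-side. The sketch below does that; clamping the limit (rather than raising) is a design choice of this illustration, since the server's behavior for out-of-range values isn't documented.

```python
VALID_SORTS = {"rating", "price_asc", "price_desc", "reviews", "projects"}

def build_find_best_args(region: str = "", category: str = "",
                         min_rating: float = 0, min_price: float = 0,
                         max_price: float = 0, has_phone: bool = False,
                         has_projects: bool = False,
                         sort_by: str = "rating", limit: int = 10) -> dict:
    """Assemble arguments for find_best_companies with client-side guards.

    The sort keys and the 1-50 limit come from the tool description;
    clamping instead of erroring is an assumption of this sketch.
    """
    if sort_by not in VALID_SORTS:
        raise ValueError(f"sort_by must be one of {sorted(VALID_SORTS)}")
    limit = max(1, min(limit, 50))  # description allows 1-50 results
    return {
        "region": region, "category": category, "min_rating": min_rating,
        "min_price": min_price, "max_price": max_price,
        "has_phone": has_phone, "has_projects": has_projects,
        "sort_by": sort_by, "limit": limit,
    }
```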
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the tool as a 'smart lead generation' tool for filtering and sorting, but lacks details on behavioral traits such as rate limits, authentication needs, pagination, or what happens with invalid inputs. For a tool with 9 parameters and no annotations, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It starts with a high-level purpose, provides usage context, and then details parameters in a clear list. Every sentence earns its place, but the parameter section could be slightly more concise (e.g., by grouping related filters). Overall, it's efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (9 parameters, no annotations, but with an output schema), the description is fairly complete. It covers purpose, usage, and parameter semantics thoroughly. Since an output schema exists, it doesn't need to explain return values. However, it lacks behavioral details (e.g., error handling or performance), which holds it back from a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It does so by listing all 9 parameters with clear semantics: e.g., 'region: Filter by region name (e.g. 'Москва'). Empty = all.' and 'sort_by: Sort by: 'rating', 'price_asc', 'price_desc', 'reviews', 'projects'.' This adds essential meaning beyond the bare schema, making it easy for an agent to understand each parameter's purpose and usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Smart lead generation: find the best construction companies matching your criteria.' It specifies the verb ('find') and resource ('construction companies'), and distinguishes it from siblings like 'search_companies' by emphasizing 'best' and 'matching your criteria.' However, it doesn't explicitly differentiate from 'contractor_recommendation' or 'compare_companies,' which keeps it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context: 'Perfect for finding contractors, generating leads, or market research.' This gives practical scenarios for when to use the tool. However, it doesn't explicitly state when not to use it or name alternatives among siblings (e.g., 'search_companies' for broader searches), so it falls short of a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_categories (Grade: B)
Get all company categories with the number of companies in each category.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a read operation ('Get') but doesn't mention permissions, rate limits, pagination, sorting, or what happens when no categories exist. For a tool with zero annotation coverage, this leaves significant behavioral questions unanswered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core purpose without unnecessary words. It's appropriately sized for a zero-parameter tool and front-loads the essential information. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has zero parameters, 100% schema coverage, and an output schema exists, the description is reasonably complete for its complexity level. However, with no annotations and no behavioral context in the description, there are gaps in understanding how the tool behaves operationally. The existence of an output schema means the description doesn't need to explain return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters with 100% schema description coverage, so the schema already fully documents the input requirements. The description appropriately doesn't add parameter information since none exist. It does provide useful context about what data is returned (categories with company counts), which adds value beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('company categories') with additional context about what data is returned ('with the number of companies in each category'). It distinguishes itself from siblings like 'get_company' or 'get_regions' by focusing specifically on categories. However, it doesn't specify if this is a filtered list or comprehensive retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There's no mention of when this tool is appropriate versus other category-related tools (none exist in siblings) or when to use this versus broader tools like 'search_companies' or 'get_stats'. The agent must infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_company (Grade: A)
Get full company profile including contacts, prices, rating, reviews, and list of house projects.
Args: company_id: Company UUID from search_companies results
| Name | Required | Description | Default |
|---|---|---|---|
| company_id | Yes |
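The description states that `company_id` is a UUID obtained from search_companies results, implying a two-step flow: search first, then fetch the profile. The sketch below illustrates that chaining; the `"id"` field on the search hit and the request-builder name are assumptions, since the actual output schema only declares an opaque `result`.

```python
def extract_company_id(search_hit: dict) -> str:
    """Pull a company UUID out of a search_companies hit.

    The 'id' field name is an assumption; the server's output schema
    does not document the result structure.
    """
    return search_hit["id"]

def build_get_company_call(company_id: str) -> dict:
    """Hypothetical MCP tools/call request for get_company."""
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "get_company",
            "arguments": {"company_id": company_id},
        },
    }

# Typical flow: search first, then fetch the full profile by UUID
hit = {"id": "5f1c0000-0000-0000-0000-000000000000", "name": "СтройДом"}  # stubbed search hit
request = build_get_company_call(extract_company_id(hit))
```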
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states what data is retrieved but doesn't disclose behavioral traits such as permissions needed, rate limits, error handling, or whether it's a read-only operation. The description is functional but lacks critical operational context for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a concise Args section. Every sentence earns its place by providing essential information without redundancy, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, retrieval-focused), the description covers the purpose and parameter semantics adequately. The presence of an output schema reduces the need to explain return values. However, without annotations, it lacks behavioral transparency, slightly impacting completeness for safe agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates by explaining 'company_id' as a 'Company UUID from search_companies results', adding meaningful context beyond the bare schema. This clarifies the parameter's origin and format, though it doesn't detail validation or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('full company profile') with specific content details (contacts, prices, rating, reviews, house projects). It distinguishes itself from siblings like 'search_companies' (searching) and 'company_deep_profile' (deep profiling) by focusing on comprehensive single-company retrieval. However, it doesn't explicitly contrast with 'company_portfolio', which might overlap.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying the source of 'company_id' from 'search_companies results', suggesting a workflow. However, it lacks explicit when-to-use guidance versus alternatives like 'company_deep_profile' or 'company_portfolio', and doesn't mention prerequisites or exclusions, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_project (grade A)
Get detailed house project information including specifications, price, features, and company contacts.
Args: project_id: Project UUID from search_projects results
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
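The Args note implies a two-step workflow: search first, then fetch by ID. A sketch of that hand-off in Python; the response field names below are assumptions, since the real output schema only declares an opaque `result`:

```python
# Hypothetical shape of a search_projects result; field names are assumptions.
search_result = {
    "projects": [
        {"project_id": "0f8fad5b-d9cb-469f-a165-70867728950e", "name": "Каркасный дом 120 м²"},
        {"project_id": "7c9e6679-7425-40de-944b-e07fc1f90ae7", "name": "Баня 40 м²"},
    ]
}

def get_project_args(search_result: dict) -> dict:
    """Pick the first search hit and build the arguments for get_project."""
    return {"project_id": search_result["projects"][0]["project_id"]}

args = get_project_args(search_result)
```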
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes what information is retrieved (specifications, price, features, company contacts) but doesn't disclose behavioral traits like whether this is a read-only operation, potential rate limits, authentication requirements, or error handling. The description adds basic context about the data returned but lacks operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence clearly states the tool's purpose, and the second sentence provides essential parameter guidance. There's no wasted text, and the structure separates general description from parameter details efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, read operation), the description is reasonably complete. It explains what information is retrieved and provides parameter semantics. Since an output schema exists, the description doesn't need to detail return values. The main gap is lack of behavioral transparency, but overall it covers the essentials for this type of lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context for the single parameter: it explains that 'project_id' is a 'Project UUID from search_projects results.' This clarifies the parameter's purpose and source beyond what the schema provides (which has 0% description coverage and only states it's a required string). Since there's only one parameter and the description compensates well for the low schema coverage, this earns a high score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get detailed house project information including specifications, price, features, and company contacts.' It specifies the verb 'Get' and resource 'detailed house project information' with concrete examples of what information is included. However, it doesn't explicitly differentiate from sibling tools like 'search_projects' or 'project_estimator' beyond mentioning the project_id source.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some implied usage guidance by stating 'Project UUID from search_projects results,' suggesting this tool should be used after obtaining a project ID from search_projects. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_company' or 'company_portfolio,' nor does it provide clear exclusions or prerequisites beyond the ID requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_regions (grade A)
Get all available regions with the number of companies in each region.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
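With no parameters, the tool is called with an empty arguments object, and the interesting work happens on the response. A sketch of post-processing the per-region company counts; the response shape and the region figures are illustrative assumptions, not real server output:

```python
# Hypothetical get_regions response; region names and counts are illustrative.
regions_result = {
    "regions": [
        {"name": "Москва", "companies": 812},
        {"name": "Санкт-Петербург", "companies": 455},
        {"name": "Казань", "companies": 120},
    ]
}

def total_companies(result: dict) -> int:
    """Sum the per-region company counts returned by get_regions."""
    return sum(region["companies"] for region in result["regions"])
```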
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions what data is returned (regions with company counts) but lacks behavioral details such as whether this is a read-only operation, if there are rate limits, how data is formatted, or if authentication is required. The description is minimal and does not compensate for the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded with the main action and includes all relevant details concisely.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters, an output schema exists, and annotations are absent, the description is adequate but minimal. It specifies the return data (regions with company counts), which helps, but lacks context on behavioral aspects like safety or performance. The output schema may cover return values, but the description could be more complete for a tool with no annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the lack of inputs. The description adds no parameter information, which is acceptable here as there are no parameters to describe. Baseline is 4 for zero parameters, as no additional semantics are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get all available regions') and the resource ('regions'), including the additional data returned ('with the number of companies in each region'). It distinguishes from siblings like 'region_comparison' by focusing on listing rather than comparing regions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving region data with company counts, but does not explicitly state when to use this tool versus alternatives like 'region_comparison' or 'market_analytics'. No guidance on prerequisites or exclusions is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stats (grade B)
Get catalog statistics: total companies, projects, regions, categories, agent queries today, and leads generated.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
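Because the description enumerates the metrics but the output schema only declares `result`, a client may want to check which described metrics actually arrive. A sketch, with the caveat that the exact response keys are assumptions derived from the prose:

```python
# Metric names are taken from the tool description; the exact response keys
# are assumptions, since the output schema only declares `result`.
EXPECTED_METRICS = {
    "total_companies", "total_projects", "total_regions",
    "total_categories", "agent_queries_today", "leads_generated",
}

def missing_metrics(stats: dict) -> set:
    """Return which described metrics are absent from a get_stats response."""
    return EXPECTED_METRICS - stats.keys()

sample = {"total_companies": 3395, "total_projects": 13436, "total_regions": 18}
```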
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. It states what data is returned but doesn't mention whether this is a read-only operation, if it requires authentication, rate limits, or how current the statistics are (e.g., real-time vs. cached). For a stats tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get catalog statistics') followed by specific metrics. Every word earns its place with no redundancy or unnecessary elaboration, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, output schema exists), the description is reasonably complete. It specifies the exact metrics returned, which complements the output schema. However, without annotations, it could better address behavioral aspects like data freshness or access requirements to be fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100% (empty schema). The description doesn't need to add parameter information, so it appropriately focuses on output semantics by listing the specific metrics returned. This exceeds the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get catalog statistics' with specific metrics listed (total companies, projects, regions, etc.). It uses a specific verb ('Get') and identifies the resource ('catalog statistics'), but doesn't explicitly distinguish it from sibling tools like 'market_analytics' or 'trend_analyzer' that might also provide statistical data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or compare it to sibling tools like 'market_report' or 'trend_analyzer' that might offer overlapping functionality. The user must infer usage from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_analytics (grade B)
Get comprehensive market analytics for the Russian construction market. Returns: average prices, top companies by rating, market size, price distribution. Perfect for investors, analysts, and companies entering the market. Args: region: Filter by region (e.g. 'Москва', 'Санкт-Петербург'). Empty = all regions. category: Filter by category (e.g. 'Строительство домов'). Empty = all categories.
| Name | Required | Description | Default |
|---|---|---|---|
| region | No | | |
| category | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
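The empty-string convention ('Empty = all') is easy to get wrong on the client side, so it is worth encoding once. A small sketch of an argument builder (the helper name is illustrative):

```python
def market_analytics_args(region: str = "", category: str = "") -> dict:
    """Build market_analytics arguments. Per the description, an empty string
    means 'all regions' / 'all categories', so both filters default to ''."""
    return {"region": region, "category": category}

# Moscow only, across every category:
args = market_analytics_args(region="Москва")
```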
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool 'Returns: average prices, top companies by rating, market size, price distribution' but does not disclose behavioral traits like data freshness, rate limits, authentication needs, or whether it's a read-only operation. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by return values, usage context, and parameter details. Each sentence adds value, though the 'Perfect for...' line could be more tightly integrated. Overall, it's efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (market analytics with filtering), no annotations, and an output schema (which covers return values), the description is moderately complete. It explains parameters well but lacks behavioral details like data sources or limitations. With an output schema, it doesn't need to detail return values, but other gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics beyond the schema: it explains that 'region' and 'category' are filters with examples (e.g., 'Москва', 'Строительство домов') and clarifies that empty values mean 'all regions' or 'all categories'. This provides essential context not in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get comprehensive market analytics for the Russian construction market' with specific return values (average prices, top companies, etc.). It distinguishes from siblings like 'market_report' or 'get_stats' by specifying the construction market focus and analytics scope, though not explicitly contrasting them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('Perfect for investors, analysts, and companies entering the market') but does not explicitly state when to use this tool versus alternatives like 'market_report' or 'get_stats'. It provides general audience guidance but lacks specific differentiation from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_report (grade A)
Generate a comprehensive market report for a specific region. Includes: market size, price tiers, top players, contact availability, competitive landscape. Perfect for investors, business development, and market entry analysis. Args: region: Region name (e.g. 'Москва', 'Ленинградская область').
| Name | Required | Description | Default |
|---|---|---|---|
| region | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
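Unlike market_analytics, here `region` is required, so a client can validate before sending the request. A sketch of that guard (the function name is illustrative; the server's own error behavior for an empty region is not documented):

```python
def market_report_args(region: str) -> dict:
    """market_report requires a region name; fail fast on empty input rather
    than sending a request the server may reject."""
    if not region.strip():
        raise ValueError("region is required for market_report")
    return {"region": region}
```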
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation (report generation) but does not specify authentication needs, rate limits, or data freshness. The mention of 'comprehensive' suggests depth, but lacks concrete behavioral details like processing time or output format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by usage context and parameter details. It uses bullet-like formatting for components, but the 'Args:' section could be integrated more smoothly. Overall, it's efficient with minimal fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (generating a comprehensive report) and the presence of an output schema (which covers return values), the description is reasonably complete. It outlines the report's components and usage scenarios, though it could benefit from more behavioral details like data sources or limitations to fully compensate for the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaningful context for the single parameter 'region' by providing examples ('Москва', 'Ленинградская область') and clarifying it's a region name, which goes beyond the schema's basic string type. However, it does not detail constraints like valid region formats or boundaries.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Generate a comprehensive market report') and resource ('for a specific region'), with detailed components listed (market size, price tiers, etc.). It distinguishes from sibling tools like 'market_analytics' or 'region_comparison' by focusing on report generation rather than analysis or comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Perfect for investors, business development, and market entry analysis'), which helps differentiate it from general analytics tools. However, it does not explicitly state when not to use it or name specific alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
price_comparison (grade B)
Compare construction prices across regions and categories. Returns detailed price statistics, percentiles, and regional rankings. Args: regions: Comma-separated regions to compare (e.g. 'Москва,Санкт-Петербург'). Empty = all. category: Filter by category. Empty = all.
| Name | Required | Description | Default |
|---|---|---|---|
| regions | No | | |
| category | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool 'Returns detailed price statistics, percentiles, and regional rankings,' which gives some insight into output behavior. However, it lacks critical details such as data sources, update frequency, rate limits, or error handling, which are important for a data comparison tool with no structured annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and concise, with three sentences that efficiently cover purpose, output, and parameters. It front-loads the core functionality and avoids unnecessary details, making it easy to parse. However, the parameter explanations could be slightly more integrated into the flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there is an output schema (which handles return values), no annotations, and low schema coverage, the description is moderately complete. It covers purpose and parameters adequately but lacks behavioral context like data freshness or limitations. For a tool with 2 parameters and no annotations, it meets minimum viability but has clear gaps in usage guidance and transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics: it explains that 'regions' is a comma-separated list with an example ('Москва,Санкт-Петербург') and that empty means 'all,' and similarly for 'category.' This clarifies usage beyond the bare schema, though it could provide more details like valid region or category values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Compare construction prices across regions and categories.' It specifies the verb 'compare' and the resources 'construction prices,' 'regions,' and 'categories.' However, it does not explicitly differentiate from sibling tools like 'region_comparison' or 'market_analytics,' which might have overlapping functions, so it falls short of a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'region_comparison,' 'market_analytics,' and 'get_stats,' there is no indication of specific contexts, prerequisites, or exclusions for using 'price_comparison.' This lack of comparative guidance limits its utility in tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
project_estimator (grade B)
Estimate construction project cost based on area, region, category and quality level (economy/standard/premium). Uses real market data from our database.
| Name | Required | Description | Default |
|---|---|---|---|
| region | No | | |
| quality | No | | standard |
| area_sqm | Yes | | |
| category | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
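Only `area_sqm` is required and `quality` defaults to `standard`, so the three named quality tiers can be validated client-side. A sketch of an argument builder under those assumptions (the helper name is illustrative):

```python
QUALITY_LEVELS = ("economy", "standard", "premium")

def project_estimator_args(area_sqm: float, region: str = "", category: str = "",
                           quality: str = "standard") -> dict:
    """Only area_sqm is required; quality defaults to 'standard' as the
    parameter table shows. Validate quality before calling the server."""
    if quality not in QUALITY_LEVELS:
        raise ValueError(f"quality must be one of {QUALITY_LEVELS}")
    return {"area_sqm": area_sqm, "region": region,
            "category": category, "quality": quality}
```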
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions using 'real market data from our database', which hints at data sources but doesn't disclose critical behavioral traits like whether this is a read-only operation, if it requires authentication, rate limits, or what happens with invalid inputs. For a cost estimation tool with no annotations, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. The second sentence adds valuable context about data sources without redundancy. Every sentence earns its place, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage, no annotations, but an output schema exists, the description is moderately complete. It covers the purpose and key inputs but lacks behavioral details and usage guidelines. The output schema likely handles return values, so the description doesn't need to explain those, but it should do more for a tool with multiple parameters and no annotation support.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description lists the parameters (area, region, category, quality level) and provides some context for quality levels (economy/standard/premium). However, with 0% schema description coverage and 4 parameters (1 required), it doesn't fully compensate by explaining parameter formats, valid regions/categories, or unit expectations (e.g., area_sqm in square meters). The baseline is 3 since it adds some meaning but not comprehensive details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Estimate construction project cost based on area, region, category and quality level.' It specifies the verb (estimate), resource (construction project cost), and key parameters. However, it doesn't explicitly differentiate from sibling tools like 'calculate_cost' or 'price_comparison', which likely have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'calculate_cost', 'price_comparison', and 'market_analytics', there's no indication of what makes this tool distinct or when it should be preferred. The mention of 'real market data from our database' is a feature but not a usage guideline.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
region_comparison (grade C)
Compare construction markets across regions. Provide comma-separated region names. Shows companies count, ratings, prices, contact availability for each region side by side.
| Name | Required | Description | Default |
|---|---|---|---|
| regions | Yes | | |
| category | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
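The schema exposes an optional `category` filter that the description never mentions, so a client-side builder is a good place to document it. A sketch (the helper name is illustrative):

```python
def region_comparison_args(regions: list[str], category: str = "") -> dict:
    """regions is required and comma-separated; category is an optional filter
    present in the schema even though the description never mentions it."""
    if not regions:
        raise ValueError("at least one region is required")
    return {"regions": ",".join(regions), "category": category}

args = region_comparison_args(["Москва", "Казань"])
```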
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions what data is shown (companies count, ratings, etc.) but doesn't cover critical aspects like whether this is a read-only operation, potential rate limits, authentication requirements, data freshness, or how results are formatted. For a comparison tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: first states purpose and input format, second lists output metrics. No wasted words, though it could be slightly more front-loaded by mentioning the side-by-side comparison earlier. Overall appropriately concise for its information content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), the description's main gaps are parameter documentation and behavioral context. It adequately explains the core purpose and output metrics, but with 0% schema coverage and no annotations, it should do more to explain the 'category' parameter and usage constraints. For a comparison tool with output schema, this is minimally adequate but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions only the 'regions' parameter ('Provide comma-separated region names') and completely ignores the 'category' parameter. With 2 parameters and no schema descriptions, the description fails to explain what 'category' does or how it affects the comparison, leaving half the parameters unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Compare construction markets across regions' with specific outputs (companies count, ratings, prices, contact availability). It distinguishes from siblings like 'compare_companies' by focusing on regional markets rather than individual companies. However, it doesn't explicitly contrast with 'market_analytics' or 'market_report', leaving some sibling differentiation incomplete.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance: 'Provide comma-separated region names' indicates how to format input, but offers no context on when to use this tool versus alternatives like 'market_analytics', 'market_report', or 'get_regions'. There's no mention of prerequisites, limitations, or specific scenarios where this comparison is most valuable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
request_quote (Grade A)
Send a quote request to a construction company on behalf of the user. Returns confirmation with lead ID and company contact details.
Args:
- company_id: Target company UUID (required)
- project_id: Specific project UUID if the user is interested in a particular house project
- name: Client's name for the quote request
- phone: Client's phone number for callback
- email: Client's email address
- comment: Additional comments or requirements for the quote
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | | |
| email | No | | |
| phone | No | | |
| comment | No | | |
| company_id | Yes | | |
| project_id | No | | |
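An illustrative argument payload, assuming arguments are passed as a JSON object. The UUID and contact details below are placeholders, not real identifiers; note that this call creates a lead on the target company's side:

```json
{
  "company_id": "00000000-0000-0000-0000-000000000000",
  "name": "Ivan Petrov",
  "phone": "+7 900 000-00-00",
  "email": "ivan@example.com",
  "comment": "Interested in a two-storey frame house, budget up to 5M RUB"
}
```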
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the action ('Send a quote request') and return values ('confirmation with lead ID and company contact details'), which is helpful. However, it doesn't mention behavioral aspects like rate limits, authentication requirements, error conditions, or whether this is a write operation that creates a lead.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two purpose sentences followed by a structured parameter list. Every sentence adds value, though the parameter section could be slightly more concise by grouping related fields (contact info).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with 0% schema coverage and no annotations, the description does well by explaining all parameters and mentioning return values. Since an output schema exists, it doesn't need to detail return structure. However, for a tool that likely performs a write operation (sending requests), more behavioral context would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It provides clear semantic explanations for all 6 parameters beyond just their names, including which is required ('company_id: Target company UUID (required)') and contextual meanings (e.g., 'project_id: Specific project UUID if the user is interested in a particular house project').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Send a quote request') and target ('to a construction company on behalf of the user'), distinguishing it from sibling tools like 'calculate_cost' or 'company_portfolio' which serve different purposes. It specifies the verb+resource combination explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (sending quote requests to construction companies) but doesn't explicitly state when to use this tool versus alternatives like 'compare_companies' or 'contractor_recommendation'. No guidance on prerequisites or exclusions is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
review_analysis (Grade B)
Analyze company reviews - sentiment breakdown, common themes, strengths and weaknesses. Provide company_slug for specific company or region/category for market overview.
| Name | Required | Description | Default |
|---|---|---|---|
| region | No | | |
| category | No | | |
| company_slug | No | | |
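Two illustrative payloads for the tool's two modes, assuming the modes are mutually exclusive as the description implies (the slug and filter values are placeholders):

```json
{ "company_slug": "example-company" }
```

or, for a market-wide overview:

```json
{ "region": "Московская область", "category": "каркасные_дома" }
```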
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but reveals minimal behavioral traits. It mentions what the analysis includes (sentiment, themes, strengths/weaknesses) but doesn't disclose permissions needed, data sources, rate limits, processing time, or whether this is a read-only operation. The description doesn't contradict annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that efficiently cover purpose and parameter usage. It's front-loaded with the core functionality, though the second sentence could be slightly clearer about the mutual exclusivity of parameter modes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with 0% schema coverage, no annotations, but an output schema exists, the description provides basic contextual information but is incomplete. It explains what the tool does and parameter purposes but lacks behavioral details, error conditions, and doesn't fully compensate for the missing schema descriptions. The output schema reduces but doesn't eliminate the need for more context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains the semantic purpose of parameters (company_slug for specific company, region/category for market overview) but doesn't specify format, constraints, or how they interact. The description adds meaningful context beyond the bare schema but leaves significant gaps about parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: analyzing company reviews with specific outputs (sentiment breakdown, common themes, strengths and weaknesses). It distinguishes between two modes (specific company vs. market overview) but doesn't explicitly differentiate from sibling tools like 'market_analytics' or 'trend_analyzer' which might overlap in functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use each parameter mode (company_slug for specific company, region/category for market overview), but doesn't provide explicit guidance on when to choose this tool over alternatives like 'market_analytics' or 'company_deep_profile'. No exclusion criteria or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_companies (Grade A)
Search Russian construction companies by category, region, and budget. Returns company name, rating, prices, website, and phone number.
Args:
- query: Free text search query (e.g. 'каркасные дома недорого', 'frame houses')
- category: Company category filter (каркасные_дома, дома_из_бруса, газобетон, кирпич, недвижимость, модульные_дома, СИП)
- region: Region or city name (e.g. 'Московская область', 'Санкт-Петербург', 'Краснодар')
- budget_max: Maximum budget in rubles. Set to 0 for no limit.
- limit: Number of results to return, maximum 20
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
| region | No | | |
| category | No | | |
| budget_max | No | | |
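An illustrative payload built from the documented parameter values (the budget and limit figures are arbitrary examples):

```json
{
  "query": "каркасные дома недорого",
  "category": "каркасные_дома",
  "region": "Московская область",
  "budget_max": 5000000,
  "limit": 10
}
```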
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the return format (company name, rating, prices, website, phone) and maximum result limit (20), which are valuable behavioral traits. However, it doesn't mention pagination, rate limits, authentication needs, or what happens with empty results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose statement first, then return format, then detailed parameter explanations. Every sentence adds value. Could be slightly more concise by combining some parameter explanations, but overall efficient with no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with 0% schema coverage and no annotations, the description does excellent work explaining parameters and return format. The existence of an output schema reduces need to fully document returns. Missing only some behavioral context like pagination or error handling for a complete picture.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must fully compensate. It provides excellent parameter semantics: explains each parameter's purpose, gives examples for query, category, and region, clarifies budget_max=0 means no limit, and specifies limit maximum. This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches Russian construction companies with specific filters (category, region, budget) and returns detailed company information. It distinguishes from siblings like 'get_company' (single company), 'find_best_companies' (likely ranked), and 'search_projects' (projects not companies).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching companies with filtering criteria, but doesn't explicitly state when to use this versus alternatives like 'find_best_companies' or 'get_company'. It provides clear context about what can be searched but lacks explicit exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_projects (Grade A)
Search house building projects by area, floors, material, and price. Returns project specifications, price, direct link, and company contacts.
Args:
- area_min: Minimum house area in square meters. Set to 0 for no limit.
- area_max: Maximum house area in square meters. Set to 0 for no limit.
- floors: Number of floors/stories. Set to 0 for any.
- material: Building material filter (каркас/frame, брус/timber, газобетон/aerated_concrete, кирпич/brick, СИП/SIP)
- budget_max: Maximum price in rubles. Set to 0 for no limit.
- region: Filter by company region or city name
- query: Free text search in project name and description
- limit: Number of results to return, maximum 20
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
| floors | No | | |
| region | No | | |
| area_max | No | | |
| area_min | No | | |
| material | No | | |
| budget_max | No | | |
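An illustrative payload using the documented units (square meters, rubles) and the "0 means no limit" convention; the specific figures are arbitrary examples:

```json
{
  "area_min": 100,
  "area_max": 150,
  "floors": 2,
  "material": "газобетон",
  "budget_max": 8000000,
  "region": "Краснодар",
  "limit": 10
}
```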
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses return content (specifications, price, link, contacts) and a behavioral limit (maximum 20 results), which is valuable. However, it doesn't cover other important traits like pagination, error handling, authentication needs, rate limits, or whether it's read-only or has side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: first sentence states purpose and key parameters, second describes returns, then a structured Args section details each parameter. Every sentence earns its place, though the material list could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters with 0% schema coverage and an output schema present, the description does well: it explains all parameters thoroughly and mentions return content. However, it doesn't address behavioral aspects like pagination or error handling, and with no annotations, it could better clarify safety and operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It provides detailed semantics for all 8 parameters: explains each parameter's purpose, units (square meters, rubles), special values (0 for no limit), material options with translations, and constraints (maximum 20 for limit). This adds significant meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Search') and resource ('house building projects'), listing key search criteria. It distinguishes from siblings like 'search_companies' by focusing on projects rather than companies, and from 'get_project' by being a search rather than a direct retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the listed search parameters (area, floors, material, price, region), suggesting when to use it for filtering projects. However, it doesn't explicitly state when to choose this tool over alternatives like 'search_companies' or 'get_project', nor does it mention exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trend_analyzer (Grade C)
Analyze market trends - company growth, price dynamics, rating changes by region/category. Shows how the construction market is developing over time.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | | all |
| region | No | | |
| category | No | | |
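An illustrative payload. The tool itself documents no allowed values, so the region and category values below are borrowed from the search_companies documentation and are assumptions, as is the 'all' value for period (taken from the table default):

```json
{
  "region": "Санкт-Петербург",
  "category": "дома_из_бруса",
  "period": "all"
}
```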
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions analyzing trends and showing development over time, but lacks details on permissions, rate limits, data sources, or whether it's read-only or has side effects. For a tool with 3 parameters and no annotations, this is insufficient to inform safe and effective usage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the main purpose in the first sentence, followed by additional context. It avoids unnecessary verbosity, but could be more structured by explicitly listing parameter roles or usage scenarios to improve clarity without sacrificing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with 0% schema coverage, no annotations, but an output schema exists, the description is moderately complete. It covers the tool's purpose and high-level inputs but lacks details on behavior, parameter usage, and differentiation from siblings. The output schema likely handles return values, reducing the burden, but overall it's adequate with clear gaps for a trend analysis tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'by region/category' and 'over time', hinting at the 'region', 'category', and 'period' parameters, but doesn't explain their formats, allowed values, or how they affect the analysis. This adds minimal semantic value beyond the schema's basic structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Analyze market trends' and specifies aspects like 'company growth, price dynamics, rating changes by region/category', which gives a general purpose. However, it's vague about the exact output format and doesn't clearly distinguish from siblings like 'market_analytics' or 'market_report', which might have overlapping functions. The phrase 'Shows how the construction market is developing over time' adds context but remains broad.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description implies usage for analyzing market trends in construction, but it doesn't specify prerequisites, exclusions, or compare to siblings such as 'market_analytics' or 'region_comparison'. This leaves the agent without clear direction on tool selection in context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.