
Cenogram - Polish Real Estate Data

Server Details

7M+ real estate transactions from Poland's RCN registry. Search, compare, and analyze prices.

Status: Healthy
Transport: Streamable HTTP
Repository: cenogram/mcp-server
GitHub Stars: 1
Server Listing: cenogram-mcp-server

Tool Descriptions: A

Average 4.1/5 across 9 of 9 tools scored. Lowest: 3.5/5.

Server Coherence: A

Disambiguation: 4/5

Most tools have distinct purposes with clear boundaries: compare_locations (comparison), get_market_overview (aggregate stats), get_price_distribution (histogram), get_price_statistics (price/m²), list_locations (location listing), search_by_area (radius search), search_by_polygon (polygon search), search_parcels (parcel lookup), and search_transactions (transaction search). However, get_price_statistics and search_transactions have some overlap in providing price data for locations, though their scopes differ (statistics vs. detailed transactions).

Naming Consistency: 5/5

All tools follow a consistent verb_noun naming pattern: compare_locations, get_market_overview, get_price_distribution, get_price_statistics, list_locations, search_by_area, search_by_polygon, search_parcels, and search_transactions. The verbs (compare, get, list, search) are used appropriately and consistently, with no mixing of conventions like camelCase or snake_case variations.

Tool Count: 5/5

With 9 tools, this server is well-scoped for its domain of Polish real estate data. Each tool serves a specific and necessary function, from high-level overviews (get_market_overview) to detailed searches (search_transactions) and geographic queries (search_by_area, search_by_polygon), without feeling bloated or sparse.

Completeness: 4/5

The toolset provides comprehensive coverage for querying and analyzing real estate data, including location listing, transaction searching, geographic searches, price analysis, and market overviews. A minor gap is the lack of tools for updating or managing data (e.g., CRUD operations), but this is reasonable for a read-only data server focused on retrieval and analysis, with no dead ends in the workflows.

Available Tools (9 tools)
compare_locations: A
Read-only

Compare real estate statistics across multiple locations side-by-side. Provide 2-5 district names to compare median price/m², average area, and transaction counts. Use list_locations first to find valid location names. Requires at least one filter besides districts (e.g., propertyType). Example: compare Mokotów, Wola, Ursynów for apartments.

Parameters (JSON Schema):
- districts (required): Comma-separated district names to compare (2-5). E.g. 'Mokotów,Wola,Ursynów'
- dateFrom: Start date (YYYY-MM-DD)
- dateTo: End date (YYYY-MM-DD)
- street: Street name filter
- minArea: Minimum area in m²
- maxArea: Maximum area in m²
- minPrice: Minimum price in PLN
- maxPrice: Maximum price in PLN
- marketType: Market type filter
- buildingType: Building type filter (PKOB classification)
- propertyType: Property type filter (recommended - API requires at least one filter)
- unitFunction: Unit/apartment function filter
- mpzpDesignation: MPZP zoning designation prefix filter (e.g. 'terenRolniczy', 'budownictwoMieszkanioweJednorodzinne', 'budownictwoMieszkanioweWielorodzinne')
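The documented constraints can be sketched as a small pre-flight check on the call arguments. This is a hypothetical helper, not part of the server; the propertyType value is an assumed example.

```python
# Validate a compare_locations payload against the documented rules:
# 2-5 comma-separated districts, plus at least one additional filter.
def validate_compare_args(args: dict) -> dict:
    districts = [d.strip() for d in args["districts"].split(",") if d.strip()]
    if not 2 <= len(districts) <= 5:
        raise ValueError("districts must list 2-5 names")
    if not set(args) - {"districts"}:
        raise ValueError("API requires at least one filter besides districts")
    return args

args = validate_compare_args({
    "districts": "Mokotów,Wola,Ursynów",
    "propertyType": "lokal",  # assumed example value
})
```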
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, and the description aligns with this by describing a comparison/analysis operation rather than a mutation. The description adds valuable behavioral context beyond annotations: the 2-5 district limit, the specific statistics returned (median price/m², average area, transaction counts), and the requirement for additional filters. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured: first sentence states purpose, second specifies inputs and constraints, third provides prerequisite guidance, fourth gives a concrete example. Every sentence earns its place with zero wasted words, and critical information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 13 parameters (though schema coverage is excellent) and no output schema, the description provides strong contextual completeness. It explains what the tool does, when to use it, prerequisites, constraints, and includes an example. The main gap is lack of output format details, but given the annotations and clear purpose, it's mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 13 parameters thoroughly. The description adds minimal parameter semantics beyond the schema - it mentions the districts parameter format (comma-separated, 2-5 names) and hints that propertyType is recommended, but doesn't provide significant additional context about parameter interactions or usage patterns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare real estate statistics across multiple locations side-by-side' with specific metrics (median price/m², average area, transaction counts) and resource scope (2-5 district names). It distinguishes from siblings by focusing on multi-location comparison rather than single-location analysis or search operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use list_locations first to find valid location names' (prerequisite), 'Requires at least one filter besides districts (e.g., propertyType)' (constraint), and includes a concrete example. It clearly indicates when to use this tool versus alternatives like list_locations for discovery.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_overview: A
Read-only

Get a comprehensive overview of the Polish real estate transaction database. Returns: total transaction count, date range, breakdown by property type and market type, top locations, price statistics.

Parameters (JSON Schema): none
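Because the tool takes no arguments, the MCP call reduces to a minimal JSON-RPC "tools/call" request. The envelope below follows the MCP specification; the id value is arbitrary.

```python
import json

# Minimal MCP tools/call request for a zero-parameter tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_market_overview", "arguments": {}},
}
payload = json.dumps(request)
```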

Behavior: 3/5

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds value by specifying the scope ('Polish real estate transaction database') and detailing the return content (e.g., total transaction count, breakdowns, statistics), which goes beyond the annotation. However, it doesn't mention behavioral aspects like rate limits, data freshness, or potential limitations in the overview.

Conciseness: 5/5

The description is extremely concise and front-loaded: the first sentence states the purpose, and the second lists the return values. Every sentence earns its place by providing essential information without any fluff or repetition. The structure is clear and efficient.

Completeness: 4/5

Given the tool's complexity (a read-only overview with no parameters) and the presence of annotations (readOnlyHint), the description is largely complete. It explains what the tool does and what it returns, which is sufficient for an agent to understand its use. However, without an output schema, the return values are described but not formally structured, leaving some ambiguity in interpretation.

Parameters: 4/5

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, focusing instead on the tool's function and outputs. This meets the baseline of 4 for zero-parameter tools, as it adds semantic context without redundancy.

Purpose: 4/5

The description clearly states the tool's purpose: 'Get a comprehensive overview of the Polish real estate transaction database' with specific outputs listed. It distinguishes from siblings like 'get_price_statistics' by offering a broader overview rather than just price data. However, it doesn't explicitly differentiate from all siblings (e.g., 'search_transactions' might also provide overview-like data).

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when this overview tool is preferable to more specific tools like 'get_price_statistics' or 'search_transactions', nor does it indicate any prerequisites or exclusions. The agent must infer usage from the purpose alone.

get_price_distribution: A
Read-only

Get price distribution histogram showing how many transactions fall into each price range. Useful for understanding the overall market price structure in Poland.

Parameters (JSON Schema):
- bins: Number of price bins (5-50, default 20)
- maxPrice: Maximum price to include (default 3,000,000 PLN)
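The parameters imply the price ranges the histogram will cover. The sketch below assumes equal-width bins from 0 PLN up to maxPrice, which the server does not document; it only illustrates how bins and maxPrice interact.

```python
# Compute the implied price-range edges: with the defaults
# (20 bins up to 3,000,000 PLN) each bin spans 150,000 PLN.
def bin_edges(bins: int = 20, max_price: int = 3_000_000) -> list[int]:
    if not 5 <= bins <= 50:
        raise ValueError("bins must be between 5 and 50")
    width = max_price // bins
    return [i * width for i in range(bins + 1)]

edges = bin_edges()  # 21 edges for the 20 default bins
```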
Behavior: 3/5

Annotations indicate readOnlyHint=true, so the agent knows this is a safe read operation. The description adds context about the tool's purpose (price distribution histogram) and geographic scope (Poland), which is useful beyond annotations. However, it doesn't disclose behavioral traits like rate limits, data freshness, or error conditions, which could be relevant for a data-fetching tool.

Conciseness: 5/5

The description is two sentences, front-loaded with the core purpose and followed by a usage note. Every sentence adds value: the first defines the tool's function, and the second provides contextual utility. There is no wasted text, making it efficient and well-structured.

Completeness: 3/5

Given the tool's complexity (simple read operation with two parameters), annotations cover safety (readOnlyHint), and schema covers parameters fully, the description is adequate but minimal. It lacks details on output format (no output schema provided), data sources, or limitations, which could help an agent use it more effectively. Completeness is moderate, meeting basic needs but with room for enhancement.

Parameters: 3/5

Schema description coverage is 100%, with clear descriptions for both parameters ('bins' and 'maxPrice'), including defaults and constraints. The description doesn't add any parameter semantics beyond what the schema provides, such as explaining how bins affect histogram granularity or why maxPrice is capped. Baseline 3 is appropriate since the schema handles parameter documentation adequately.

Purpose: 4/5

The description clearly states the tool's purpose: 'Get price distribution histogram showing how many transactions fall into each price range.' It specifies the verb ('Get'), resource ('price distribution histogram'), and outcome ('showing how many transactions fall into each price range'). However, it doesn't explicitly differentiate from sibling tools like 'get_price_statistics' or 'get_market_overview', which might offer related market insights.

Usage Guidelines: 3/5

The description provides some usage context: 'Useful for understanding the overall market price structure in Poland.' This implies when to use it (for market price analysis in Poland) but doesn't specify when not to use it or name alternatives among siblings (e.g., 'get_price_statistics' might offer different statistical insights). The guidance is implied rather than explicit.

get_price_statistics: A
Read-only

Get price per m² statistics by location for residential apartments in Poland. Note: only covers residential units (lokale mieszkalne). For other property types, use search_transactions. 'Warszawa'/'Kraków'/'Łódź' auto-expand to all sub-districts (Warszawa=19, Kraków=5, Łódź=6). Other names use partial match.

Parameters (JSON Schema):
- location: Filter by location name. 'Warszawa'/'Kraków'/'Łódź' auto-expand to all sub-districts. Other names use case-insensitive partial match (e.g. 'Wrocł' matches 'Wrocław'). Omit for all Poland.
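The documented location handling can be sketched as a two-branch lookup. The sub-district lists below are truncated samples (Warszawa actually expands to 19, Kraków to 5, Łódź to 6); the function is illustrative, not the server's code.

```python
# Three city names auto-expand to sub-districts; everything else is a
# case-insensitive partial match against known district names.
AUTO_EXPAND = {
    "Warszawa": ["Mokotów", "Wola", "Ursynów"],
    "Kraków": ["Kraków-Podgórze", "Kraków-Śródmieście"],
    "Łódź": ["Łódź-Bałuty", "Łódź-Widzew"],
}

def resolve_location(query: str, known_districts: list[str]) -> list[str]:
    if query in AUTO_EXPAND:
        return AUTO_EXPAND[query]
    # partial match, e.g. 'Wrocł' matches 'Wrocław'
    return [d for d in known_districts if query.lower() in d.lower()]
```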
Behavior: 4/5

The description adds valuable behavioral context beyond the readOnlyHint annotation. It explains location name handling rules: auto-expansion for three major cities and partial matching for others. It also clarifies scope limitations (residential units only). While it doesn't mention rate limits or authentication needs, it provides useful operational details that aren't captured in annotations.

Conciseness: 5/5

The description is perfectly structured and economical. The first sentence states the core purpose, followed by a critical limitation note, then specific behavioral rules for location handling. Every sentence earns its place with essential information, and the information is front-loaded appropriately.

Completeness: 4/5

For a read-only tool with good annotations and complete schema coverage, the description provides excellent context about scope limitations and location handling rules. The main gap is the lack of output schema, so the description doesn't explain what statistics are returned or in what format. However, given the tool's relative simplicity, the description is quite complete for agent usage.

Parameters: 3/5

With 100% schema description coverage, the input schema already documents the location parameter thoroughly. The description reinforces the auto-expansion behavior for three cities and partial matching for others, but doesn't add significant semantic value beyond what's in the schema. The baseline score of 3 is appropriate when the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the specific action ('Get price per m² statistics'), resource ('by location for residential apartments in Poland'), and scope ('only covers residential units'). It distinguishes from sibling 'search_transactions' by specifying property type limitations, making the purpose unambiguous and differentiated.

Usage Guidelines: 5/5

The description provides explicit usage guidance: 'only covers residential units (lokale mieszkalne). For other property types, use search_transactions.' This clearly defines when to use this tool versus an alternative, including both inclusion criteria (residential apartments) and exclusion criteria (other property types).

list_locations: A
Read-only

List available locations (cities and districts) in the database. Returns administrative districts - for most cities, the district name equals the city name. For Warsaw: returns district names (Mokotów, Śródmieście, Wola, etc.), not 'Warszawa'. For Kraków: returns sub-districts (Kraków-Podgórze, Kraków-Śródmieście, etc.). Use the search parameter to filter by name.

Parameters (JSON Schema):
- search: Filter locations by name (case-insensitive partial match, e.g. 'Krak' for Kraków districts)
Behavior: 4/5

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond this: it explains the return format (city/district names with special handling for Warsaw and Kraków), clarifies that filtering is case-insensitive partial match, and notes that for most cities, district names equal city names. This enhances understanding without contradicting annotations.

Conciseness: 5/5

The description is front-loaded with the core purpose, followed by specific details on return behavior and usage of the search parameter. Each sentence adds value: the first defines the tool, the second explains return logic, the third and fourth detail city-specific cases, and the fifth covers filtering. There is no wasted text.

Completeness: 4/5

Given the tool's low complexity (one optional parameter) and annotations covering safety, the description is mostly complete. It explains what is returned and how, with examples. However, without an output schema, it could benefit from more detail on the return structure (e.g., list format, any metadata), though the annotations and context mitigate this gap.

Parameters: 3/5

Schema description coverage is 100%, with the parameter 'search' fully documented in the schema. The description adds minimal extra meaning by reiterating the filter purpose and providing an example ('Krak' for Kraków districts), but does not introduce new semantics beyond what the schema already covers, aligning with the baseline for high coverage.

Purpose: 5/5

The description clearly states the verb 'List' and resource 'available locations (cities and districts)', making the purpose specific. It distinguishes from siblings by focusing on administrative divisions rather than market data, pricing, or spatial searches, which are covered by tools like get_market_overview or search_by_polygon.

Usage Guidelines: 4/5

The description provides clear context on when to use it (for administrative districts, with special cases for Warsaw and Kraków) and mentions using the search parameter for filtering. However, it does not explicitly state when not to use it or name specific alternatives among siblings, such as search_by_area for broader spatial queries.

search_by_area: A
Read-only

Search real estate transactions within a geographic radius. Provide latitude/longitude coordinates and a radius in km. Example: find apartment sales within 2km of Warsaw's Palace of Culture (lat 52.2317, lng 21.0060). Area filters (minArea/maxArea) work for all propertyType values via COALESCE(usable_area_m2, parcel_area).

Parameters (JSON Schema):
- latitude (required): Latitude (Poland range: 49-55)
- longitude (required): Longitude (Poland range: 14-25)
- radiusKm: Search radius in kilometers (0.1-50, default 2)
- limit: Number of results (1-50, default 20)
- dateFrom: Start date (YYYY-MM-DD)
- dateTo: End date (YYYY-MM-DD)
- minArea: Minimum area in m² (usable_area_m2 for units, parcel_area for land)
- maxArea: Maximum area in m²
- minPrice: Minimum price in PLN
- maxPrice: Maximum price in PLN
- marketType: Market type filter
- buildingType: Building type filter (PKOB classification)
- propertyType: Property type filter
- unitFunction: Unit/apartment function filter
Behavior: 4/5

Annotations declare readOnlyHint=true, and the description doesn't contradict this. The description adds valuable behavioral context beyond annotations: it explains how area filters work across different property types via COALESCE logic, which is important operational detail not captured in the schema or annotations.

Conciseness: 5/5

The description is efficiently structured with three focused sentences: purpose statement, parameter guidance with example, and important behavioral detail about area filters. Every sentence adds value with zero wasted words, and it's appropriately sized for the tool's complexity.

Completeness: 4/5

For a read-only tool with excellent schema coverage (100%) but no output schema, the description provides good context about the search behavior and area filtering logic. However, it doesn't describe the return format or result structure, which would be helpful given the absence of an output schema.

Parameters: 3/5

With 100% schema description coverage, the baseline is 3. The description adds some value by emphasizing the required latitude/longitude coordinates and radius parameters, and explaining the area filter behavior, but doesn't provide significant additional parameter semantics beyond what's already well-documented in the schema.

Purpose: 5/5

The description clearly states the specific action ('Search real estate transactions within a geographic radius') and resource ('real estate transactions'), distinguishing it from siblings like search_by_polygon or search_parcels by emphasizing the radius-based geographic search approach.

Usage Guidelines: 3/5

The description provides an example that implies usage context ('find apartment sales within 2km of Warsaw's Palace of Culture'), but doesn't explicitly state when to use this tool versus alternatives like search_by_polygon or search_transactions. No explicit exclusions or comparisons to sibling tools are provided.

search_by_polygon: A
Read-only

Search real estate transactions within a geographic polygon. Provide a GeoJSON Polygon geometry to search within a custom area. Returns transactions found inside the polygon with coordinates. Use for precise area searches (neighborhoods, streets, custom regions). Coordinates are [longitude, latitude]. First and last point must be identical. Example: {"type":"Polygon","coordinates":[[[21.0,52.2],[21.01,52.2],[21.01,52.21],[21.0,52.21],[21.0,52.2]]]}

Parameters (JSON Schema):
- polygon (required): GeoJSON Polygon geometry. Coordinates: [longitude, latitude] pairs. Max 500 vertices.
- limit: Max results (1-5000, default 100). MCP displays up to 50 transactions.
- dateFrom: Start date (YYYY-MM-DD)
- dateTo: End date (YYYY-MM-DD)
- district: District name filter
- street: Street name filter (partial match)
- minArea: Minimum area in m²
- maxArea: Maximum area in m²
- minPrice: Minimum price in PLN
- maxPrice: Maximum price in PLN
- marketType: Market type filter
- buildingType: Building type filter (PKOB classification)
- propertyType: Property type filter
- unitFunction: Unit/apartment function filter
- mpzpDesignation: MPZP zoning designation filter (exact match)
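The polygon rules ([longitude, latitude] order, ring closed by repeating the first point, at most 500 vertices) are easy to get wrong, so a client-side check is worth sketching. The checker is a hypothetical helper, not part of the server.

```python
# Sanity-check a GeoJSON Polygon argument before sending it.
def check_polygon(geom: dict) -> dict:
    if geom.get("type") != "Polygon":
        raise ValueError("geometry must be a GeoJSON Polygon")
    ring = geom["coordinates"][0]
    if ring[0] != ring[-1]:
        raise ValueError("first and last point must be identical")
    if len(ring) > 500:
        raise ValueError("max 500 vertices")
    return geom

polygon = check_polygon({
    "type": "Polygon",
    "coordinates": [[[21.0, 52.2], [21.01, 52.2], [21.01, 52.21],
                     [21.0, 52.21], [21.0, 52.2]]],
})
```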
Behavior: 4/5

The description adds valuable behavioral context beyond the readOnlyHint annotation. It specifies that coordinates must be [longitude, latitude] pairs, that first and last points must be identical (closing the polygon), and provides a concrete GeoJSON example. While the annotation covers safety (read-only), the description adds implementation details about polygon structure that aren't in the schema descriptions.

Conciseness: 5/5

The description is perfectly structured and concise. It starts with the core purpose, explains the key parameter, states what it returns, provides usage context, gives coordinate format, and includes a complete example. Every sentence earns its place with no wasted words or redundancy.

Completeness: 4/5

For a complex tool with 15 parameters, no output schema, and only a readOnlyHint annotation, the description does well but has gaps. It thoroughly explains the polygon parameter but doesn't address how results are returned (format, pagination, or the 50-transaction display limit mentioned in the schema). The example helps, but more behavioral context about the search operation would improve completeness.

Parameters: 3/5

With 100% schema description coverage, the baseline is 3. The description mentions the polygon parameter's coordinate format and closure requirement, which adds some semantic value beyond the schema's technical description. However, it doesn't provide additional context for the other 14 parameters, so it doesn't significantly exceed the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for real estate transactions within a geographic polygon, specifying both the action (search) and resource (real estate transactions). It distinguishes from siblings by emphasizing 'precise area searches' and custom polygon geometry, unlike tools like 'search_by_area' which likely use simpler bounding boxes or predefined areas.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('for precise area searches (neighborhoods, streets, custom regions)'), but doesn't explicitly mention when NOT to use it or name specific alternatives. It implies this is for custom polygon searches versus other search methods, but doesn't directly compare to siblings like 'search_transactions' or 'search_by_area'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_parcels (A)
Read-only

Search for land parcels by parcel ID prefix (autocomplete). Returns matching parcels with their district, area, and GPS coordinates. Useful for finding exact parcel IDs, then searching transactions nearby. Example: search for parcels starting with '146518_8.01'.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| q | Yes | Parcel ID prefix to search for (min 3 chars). E.g. '146518_8.01' | |
| limit | No | Max results (1-10, default 10) | |
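A call to this tool travels as a standard MCP `tools/call` request. The sketch below builds that JSON-RPC envelope; the envelope shape follows the MCP specification, while the argument values are purely illustrative:

```python
import json

# Build an MCP tools/call request for search_parcels.
# The JSON-RPC envelope follows the MCP spec; argument values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_parcels",
        "arguments": {
            "q": "146518_8.01",  # parcel ID prefix, min 3 chars
            "limit": 5,          # 1-10, default 10
        },
    },
}
print(json.dumps(request, indent=2))
```

In practice an MCP client library assembles this envelope for you; it is shown here only to make the parameter shapes concrete.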
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations declare readOnlyHint=true, indicating a safe read operation. The description adds useful context beyond this by specifying the autocomplete behavior and the return data structure (district, area, GPS coordinates). It does not mention rate limits, authentication needs, or other behavioral traits, but with annotations covering safety, a 3 is appropriate for the added value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by return details, usage context, and an example—all in four concise sentences. Each sentence adds value without redundancy, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, read-only), 100% schema coverage, and no output schema, the description is mostly complete. It covers purpose, behavior, and usage but lacks details on error handling or exact output format. With annotations providing safety context, it is sufficient but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters ('q' and 'limit') well-documented in the schema. The description adds minimal semantic value beyond the schema, such as implying the autocomplete functionality for 'q' and providing an example, but does not explain parameter interactions or advanced usage. Baseline 3 is correct as the schema handles most documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search for land parcels'), resource ('land parcels'), and method ('by parcel ID prefix (autocomplete)'). It distinguishes from siblings like 'search_by_area' and 'search_transactions' by focusing on parcel ID prefix matching rather than geographic or transaction-based searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Useful for finding exact parcel IDs, then searching transactions nearby') and includes an example. However, it does not explicitly state when not to use it or name specific alternatives among the sibling tools, such as 'search_transactions' for related searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_transactions (A)
Read-only

Search Polish real estate transactions from the national RCN registry (7M+ records). Returns transaction details: address, date, price, area, price/m², property type. Use list_locations first to find valid location names. Example: search for apartments in Mokotów sold in 2024 above 500,000 PLN.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| page | No | Page number for pagination (default: 1) | |
| sort | No | Sort by field (default: date) | date |
| limit | No | Number of results (1-50, default 10) | |
| order | No | Sort order (default: desc) | desc |
| dateTo | No | End date (YYYY-MM-DD) | |
| street | No | Street name filter (partial match, e.g. 'Puławska', 'Trakt Lubelski') | |
| maxArea | No | Maximum area in m² | |
| minArea | No | Minimum area in m² | |
| dateFrom | No | Start date (YYYY-MM-DD) | |
| location | No | Location name - city (e.g. 'Warszawa', 'Kraków', 'Gdańsk') or district (e.g. 'Mokotów', 'Kraków-Podgórze'). 'Warszawa', 'Kraków', 'Łódź' auto-expand to all sub-districts. Use list_locations to find valid names. | |
| maxPrice | No | Maximum price in PLN | |
| minPrice | No | Minimum price in PLN | |
| parcelId | No | Exact parcel ID as returned in search results (e.g. '146518_8.0108.27'). Must match exactly - copy from a previous search result's parcel_id field. | |
| marketType | No | Market type: primary (developer) or secondary (resale) | |
| buildingType | No | Building type filter (PKOB classification) | |
| propertyType | No | Property type filter | |
| unitFunction | No | Unit/apartment function filter | |
| buildingNumber | No | Building/house number (e.g. '251C', '12A'). Requires location or street to be set. | |
| mpzpDesignation | No | MPZP zoning designation filter (exact match, e.g. 'budownictwoMieszkanioweWielorodzinne', 'terenObiektowProdukcyjnychSkladowIMagazynow') | |
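Two of the documented constraints (limit bounded 1-50, buildingNumber requiring location or street) can be enforced client-side before calling the tool. The validation helper below is a hypothetical sketch based solely on the schema descriptions above, not part of the server's API:

```python
# Client-side sanity checks for search_transactions arguments, derived from the
# documented schema. validate_transaction_args is a hypothetical helper.

def validate_transaction_args(args):
    errors = []
    limit = args.get("limit", 10)
    if not 1 <= limit <= 50:
        errors.append("limit must be between 1 and 50")
    if "buildingNumber" in args and not ("location" in args or "street" in args):
        errors.append("buildingNumber requires location or street to be set")
    return errors

# Mirrors the worked example: apartments in Mokotów, sold in 2024, above 500,000 PLN
args = {
    "location": "Mokotów",
    "dateFrom": "2024-01-01",
    "dateTo": "2024-12-31",
    "minPrice": 500_000,
    "limit": 20,
}
print(validate_transaction_args(args))  # []
```

An empty error list means the arguments satisfy the two constraints spelled out in the schema; anything else should be fixed before the call is sent.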
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, but the description adds valuable context about the data source (Polish RCN registry with 7M+ records), the specific fields returned (address, date, price, area, price/m², property type), and the example clarifies practical usage. It doesn't contradict annotations and adds meaningful behavioral information beyond the read-only hint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four compact sentences: the first establishes purpose and data scope, the second lists return fields, the third directs agents to a prerequisite tool, and the fourth gives a concrete example. Every sentence adds value with zero wasted words. Front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 19 parameters, 100% schema coverage, and a readOnlyHint annotation, the description provides excellent context about the data source, return fields, and prerequisite tool usage. The main gap is the lack of an output schema, though the description partially compensates by listing return fields. It could also benefit from mentioning pagination behavior, given the page/limit parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
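The pagination gap noted above can be bridged client-side. The sketch below assumes the common page/limit convention (stop when a page comes back short); `fetch_page` is a stand-in for a real MCP tool call, and the stopping rule is an assumption, not documented server behavior:

```python
# Sketch of paging through search_transactions results by incrementing `page`
# until a short page is returned. fetch_page is a stand-in for a real MCP
# tools/call; the stopping rule assumes the common page/limit convention.

def fetch_all(fetch_page, limit=50):
    results, page = [], 1
    while True:
        batch = fetch_page(page=page, limit=limit)
        results.extend(batch)
        if len(batch) < limit:  # short page -> no more results
            break
        page += 1
    return results

# Stubbed fetch returning 120 fake records in pages of `limit`
def stub_fetch(page, limit):
    data = [{"id": i} for i in range(120)]
    start = (page - 1) * limit
    return data[start:start + limit]

print(len(fetch_all(stub_fetch)))  # 120
```

With `limit=50` the stub yields pages of 50, 50, and 20 records; the short final page terminates the loop.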

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 19 parameters thoroughly. The description doesn't add parameter-specific semantics beyond what's in the schema, but it provides context about the data source and example usage that helps understand the parameter ecosystem. Baseline 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches Polish real estate transactions from the RCN registry (7M+ records) and returns specific transaction details. It distinguishes from siblings by specifying it searches transactions rather than comparing locations, getting market overviews, or searching by area/polygon.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs to 'Use list_locations first to find valid location names' and provides a concrete example showing when to use this tool: 'search for apartments in Mokotów sold in 2024 above 500,000 PLN.' This gives clear context for when this tool is appropriate versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
