
Cenogram - Polish Real Estate Data

Ownership verified

Server Details

7M+ real estate transactions from Poland's RCN registry. Search, compare, and analyze prices.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 9 of 9 tools scored. Lowest: 3.5/5.

Server Coherence: A
Disambiguation: 4/5

Tools are well-differentiated by function (geographic searches vs statistical aggregations vs discovery), though the four statistical tools (get_price_statistics, compare_locations, get_price_distribution, get_market_overview) return similar metrics at different granularities and could cause momentary confusion. The three search variants (by_area, by_polygon, transactions) are clearly separated by input type (coordinates vs filters).

Naming Consistency: 4/5

Follows a logical verb_noun pattern with sensible variations: 'search_by_' prefix consistently denotes geometric coordinate-based queries (area, polygon), while 'search_' denotes filtered record queries (parcels, transactions). Minor deviation with preposition usage, but semantically coherent.

Tool Count: 5/5

Nine tools provide comprehensive coverage for querying real estate registry data without redundancy. The set includes location discovery, three geographic search modes, transaction search, and four analytical aggregations—appropriate scope for a specialized Polish real estate data server.

Completeness: 4/5

Strong coverage for read-only access to the RCN registry, including parcel lookup, coordinate-based searches, and market statistics. Minor gaps include lack of time-series/trend analysis tools and get_price_statistics being restricted to residential apartments (forcing use of search_transactions for other property types).

Available Tools

9 tools
compare_locations (A)
Read-only

Compare real estate statistics across multiple locations side-by-side. Provide 2-5 district names to compare median price/m², average area, and transaction counts. Use list_locations first to find valid location names. Requires at least one filter besides districts (e.g., propertyType). Example: compare Mokotów, Wola, Ursynów for apartments.

Parameters

- dateTo (optional): End date (YYYY-MM-DD)
- street (optional): Street name filter
- maxArea (optional): Maximum area in m²
- minArea (optional): Minimum area in m²
- dateFrom (optional): Start date (YYYY-MM-DD)
- maxPrice (optional): Maximum price in PLN
- minPrice (optional): Minimum price in PLN
- districts (required): Comma-separated district names to compare (2-5). E.g. 'Mokotów,Wola,Ursynów'
- marketType (optional): Market type filter
- buildingType (optional): Building type filter (PKOB classification)
- propertyType (optional): Property type filter (recommended - API requires at least one filter)
- unitFunction (optional): Unit/apartment function filter
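The documented constraints (2-5 comma-separated districts, plus at least one additional filter) can be checked client-side before invoking compare_locations. A minimal Python sketch; the helper name and plain-dict payload shape are illustrative assumptions, not part of the server's API:

```python
# Hypothetical helper: validates the documented compare_locations
# constraints and assembles an arguments dict.
def build_compare_payload(districts: str, **filters):
    names = [d.strip() for d in districts.split(",") if d.strip()]
    if not 2 <= len(names) <= 5:
        raise ValueError("districts must contain 2-5 names")
    if not filters:
        raise ValueError("API requires at least one filter besides districts")
    return {"districts": ",".join(names), **filters}

payload = build_compare_payload("Mokotów,Wola,Ursynów", propertyType="apartment")
```

District names should come from list_locations first, per the tool description.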
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true, the description adds valuable behavioral context: it specifies the comparison yields three specific metrics (median price/m², average area, transaction counts) and discloses the API constraint requiring additional filters beyond districts. This goes beyond basic safety profiling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured across five sentences: purpose statement, input specification, workflow prerequisite, API constraint, and concrete example. Every sentence earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 12-parameter tool with no output schema, the description adequately covers input requirements, workflow dependencies, and previews the returned data dimensions. It could marginally improve by specifying the output structure format, but the metric enumeration provides sufficient context for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema description coverage (baseline 3), the description adds critical semantic constraints not in the schema: the '2-5' district count limit (schema only enforces minLength: 1) and a concrete formatting example ('Mokotów,Wola,Ursynów') that clarifies the comma-separated syntax.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Compare') and resource ('real estate statistics'), clarifies the comparison is 'side-by-side' across 'multiple locations', and specifies the exact metrics returned (median price/m², average area, transaction counts). This effectively distinguishes it from sibling tools like get_price_statistics or search_transactions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit prerequisites ('Use list_locations first to find valid location names'), input constraints ('Provide 2-5 district names'), and API requirements ('Requires at least one filter besides districts'). However, it does not explicitly contrast when to use this versus siblings like get_price_statistics for single-location analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_overview (A)
Read-only

Get a comprehensive overview of the Polish real estate transaction database. Returns: total transaction count, date range, breakdown by property type and market type, top locations, price statistics.

Parameters

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the safety profile is covered. The description adds valuable behavioral context by enumerating the specific return values (total count, date range, breakdowns) since no output schema exists, but omits other traits like pagination or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of exactly two efficient sentences with zero redundancy. The first sentence front-loads the core purpose, and the second efficiently lists return values using a colon-separated format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and no output schema, the description adequately compensates by detailing the return payload. It successfully communicates what data the agent will receive, though it could briefly clarify the geographic/temporal scope of the overview.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per the rubric, 0-parameter tools receive a baseline score of 4, as there are no parameter semantics to describe.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and specific resource ('Polish real estate transaction database'), with 'comprehensive overview' distinguishing it from sibling search and comparison tools. The return value list further clarifies the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus siblings like 'get_price_statistics' or 'search_transactions'. It lists what it returns but doesn't suggest using it for high-level market analysis before drilling down.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_price_distribution (A)
Read-only

Get price distribution histogram showing how many transactions fall into each price range. Useful for understanding the overall market price structure in Poland.

Parameters

- bins (optional): Number of price bins (5-50, default 20)
- maxPrice (optional): Maximum price to include (default 3,000,000 PLN)
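How the bins and maxPrice parameters translate into price ranges can be sketched as below. Uniform binning from 0 to maxPrice is an assumption for illustration; the server's actual bin edges may differ.

```python
# Illustrative: compute histogram bin edges from the documented
# get_price_distribution parameters (assumed uniform binning).
def bin_edges(bins: int = 20, max_price: int = 3_000_000):
    if not 5 <= bins <= 50:
        raise ValueError("bins must be 5-50")
    width = max_price / bins
    return [round(i * width) for i in range(bins + 1)]

edges = bin_edges(bins=10, max_price=1_000_000)  # 11 edges, 100k wide bins
```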
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, confirming safe read access. The description adds valuable behavioral context not found in structured fields: the geographic scope ('Poland') and the aggregation method (binning transactions into price ranges). It does not describe output format details or pagination, but 'histogram' implies the return structure adequately given the tool's simplicity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. The first sentence front-loads the core action and output format; the second provides usage context. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 2-parameter schema (100% coverage), readOnly annotation, and lack of output schema, the description provides sufficient context by specifying the geographic market (Poland) and output type (histogram). It appropriately compensates for missing output schema by indicating the histogram nature of returned data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description does not explicitly discuss the bins or maxPrice parameters, but the mention of 'histogram' and 'price range' implicitly contextualizes the binning concept without duplicating schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a 'price distribution histogram' with specific aggregation ('how many transactions fall into each price range'), which distinguishes it from sibling tools like get_price_statistics (likely summary stats) and search_transactions (individual records). However, it does not explicitly contrast with these siblings by name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides positive usage context ('Useful for understanding the overall market price structure in Poland') indicating when to use the tool. However, it lacks negative guidance (when not to use) or explicit references to alternatives like get_price_statistics for non-distribution analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_price_statistics (A)
Read-only

Get price per m² statistics by location for residential apartments in Poland. Note: only covers residential units (lokale mieszkalne). For other property types, use search_transactions. 'Warszawa' and 'Kraków' match all sub-districts via partial-match filtering.

Parameters

- location (optional): Filter by location name (case-insensitive partial match). E.g. 'Kraków' matches 'Kraków-Podgórze', 'Kraków-Śródmieście', etc. Omit for all Poland.
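The documented case-insensitive partial matching for the location parameter behaves roughly as below. The district list is a made-up sample and the matching happens server-side; this is only a local sketch of the rule.

```python
# Illustrative: case-insensitive substring matching, as documented
# for the `location` parameter. SAMPLE_DISTRICTS is invented data.
SAMPLE_DISTRICTS = ["Kraków-Podgórze", "Kraków-Śródmieście", "Mokotów", "Wola"]

def matches(location: str, district: str) -> bool:
    return location.lower() in district.lower()

hits = [d for d in SAMPLE_DISTRICTS if matches("Kraków", d)]
```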
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true. Description adds crucial behavioral context: geographic limitation (Poland only), property type restriction (residential only), and the Warsaw-specific naming quirk where city-level queries fail. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, zero waste. Purpose front-loaded in sentence 1. Sentence 2 scopes usage and names alternative. Sentence 3 provides specific parameter guidance for Warsaw. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter tool. Despite lacking output schema, description sufficiently indicates return value type (price per m² statistics). Missing minor details like specific statistical measures returned (mean, median, etc.), but complete enough for agent selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with good parameter description. Description adds valuable domain-specific usage guidance: the Warsaw district naming convention (Mokotów/Wola vs Warszawa) and reinforces the partial matching behavior with concrete examples, preventing common query errors.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'price per m² statistics' + scope 'residential apartments in Poland'. Explicitly distinguishes from sibling 'search_transactions' by stating it only covers residential units vs. other property types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names alternative tool 'search_transactions' for other property types. Provides critical usage constraint for Warsaw requiring district names (Mokotów, Wola) rather than 'Warszawa'. Could explicitly state when to prefer this over 'get_price_distribution' or 'compare_locations'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_locations (A)
Read-only

List available locations (cities and districts) in the database. Returns administrative districts - for most cities, the district name equals the city name. For Warsaw: returns district names (Mokotów, Śródmieście, Wola, etc.), not 'Warszawa'. For Kraków: returns sub-districts (Kraków-Podgórze, Kraków-Śródmieście, etc.). Use the search parameter to filter by name.

Parameters

- search (optional): Filter locations by name (case-insensitive partial match, e.g. 'Krak' for Kraków districts)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true, the description adds crucial behavioral context about return granularity—specifically that Warsaw returns districts (Mokotów, Wola) rather than 'Warszawa', and Kraków returns sub-districts—helping the agent predict return structure without an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences, all earning their place: core purpose, general behavior rule, two specific city examples that prevent confusion, and usage guidance. Front-loaded with the essential verb-resource pair.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter tool without output schema, the description compensates well with concrete return value examples (district names). Could be improved by explicitly stating the return type (array of location objects), but the behavioral examples provide sufficient predictability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the 'search' parameter, the baseline is met. The description mentions using the parameter 'to filter by name' but adds minimal semantic value beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb 'List' and clear resource 'available locations (cities and districts)', distinguishing it from sibling search tools (search_transactions, search_parcels) that return properties rather than administrative locations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides clear context about what the tool returns (administrative districts vs. city names) with specific city examples, helping the agent understand when to use it for location discovery. However, it lacks explicit guidance on when NOT to use it or direct comparison to sibling tools like compare_locations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_by_area (A)
Read-only

Search real estate transactions within a geographic radius. Provide latitude/longitude coordinates and a radius in km. Example: find apartment sales within 2km of Warsaw's Palace of Culture (lat 52.2317, lng 21.0060).

Parameters

- limit (optional): Number of results (1-50, default 20)
- dateTo (optional): End date (YYYY-MM-DD)
- dateFrom (optional): Start date (YYYY-MM-DD)
- latitude (required): Latitude (Poland range: 49-55)
- maxPrice (optional): Maximum price in PLN
- minPrice (optional): Minimum price in PLN
- radiusKm (optional): Search radius in kilometers (0.1-50, default 2)
- longitude (required): Longitude (Poland range: 14-25)
- marketType (optional): Market type filter
- buildingType (optional): Building type filter (PKOB classification)
- propertyType (optional): Property type filter
- unitFunction (optional): Unit/apartment function filter
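The schema's Poland-specific coordinate ranges and the radius bounds can be enforced client-side before calling search_by_area. A sketch under the assumption that arguments are passed as a plain dict; the function name is illustrative:

```python
# Hypothetical helper: validates the documented latitude/longitude/
# radiusKm ranges and assembles a search_by_area arguments dict.
def build_area_query(latitude: float, longitude: float,
                     radius_km: float = 2.0, **filters):
    if not 49 <= latitude <= 55:
        raise ValueError("latitude outside Poland range (49-55)")
    if not 14 <= longitude <= 25:
        raise ValueError("longitude outside Poland range (14-25)")
    if not 0.1 <= radius_km <= 50:
        raise ValueError("radiusKm must be 0.1-50")
    return {"latitude": latitude, "longitude": longitude,
            "radiusKm": radius_km, **filters}

# 2 km around Warsaw's Palace of Culture, per the description's example
query = build_area_query(52.2317, 21.0060, radius_km=2)
```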
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations declare readOnlyHint=true, indicating a safe read operation. The description adds valuable context about the geographic scope (radius) and provides a concrete coordinate example. However, it fails to mention the Poland-specific geographic constraints visible in the schema (lat 49-55, lng 14-25) or describe what data structure is returned, which would be helpful given the lack of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three highly efficient sentences: purpose declaration, input requirements, and concrete example. Every sentence earns its place without redundancy. The information is front-loaded with the core action, making it easy for an agent to quickly assess relevance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the schema's richness (12 parameters including date ranges, price filters, property types, and market types), the description is minimally adequate but has clear gaps. It focuses exclusively on the geographic search mechanism while omitting any mention of the sophisticated filtering capabilities available, which limits an agent's understanding of the tool's full utility without examining the schema directly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all 10 parameters including their types, ranges, and purposes. The description mentions the three core geographic parameters (latitude, longitude, radius) and provides an example, but does not add significant semantic depth beyond what the schema already provides, which warrants the baseline score for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search real estate transactions') and the unique scope ('within a geographic radius'). It effectively distinguishes itself from sibling tools like 'search_by_polygon' (which implies polygon geometry) and 'search_parcels' (different data type) through the explicit mention of radius-based searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides an example usage scenario (Warsaw's Palace of Culture) that implicitly demonstrates when to use the tool—when you have a center point and distance. However, it lacks explicit guidance on when to choose this over 'search_by_polygon' (e.g., 'use this for circular areas, use search_by_polygon for irregular boundaries') or other search siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_by_polygon (A)
Read-only

Search real estate transactions within a geographic polygon. Provide a GeoJSON Polygon geometry to search within a custom area. Returns transactions found inside the polygon with coordinates. Use for precise area searches (neighborhoods, streets, custom regions). Coordinates are [longitude, latitude]. First and last point must be identical. Example: {"type":"Polygon","coordinates":[[[21.0,52.2],[21.01,52.2],[21.01,52.21],[21.0,52.21],[21.0,52.2]]]}

Parameters

- limit (optional): Max results (1-5000, default 100). MCP displays up to 50 transactions.
- dateTo (optional): End date (YYYY-MM-DD)
- street (optional): Street name filter (partial match)
- maxArea (optional): Maximum area in m²
- minArea (optional): Minimum area in m²
- polygon (required): GeoJSON Polygon geometry. Coordinates: [longitude, latitude] pairs. Max 500 vertices.
- dateFrom (optional): Start date (YYYY-MM-DD)
- district (optional): District name filter
- maxPrice (optional): Maximum price in PLN
- minPrice (optional): Minimum price in PLN
- marketType (optional): Market type filter
- buildingType (optional): Building type filter (PKOB classification)
- propertyType (optional): Property type filter
- unitFunction (optional): Unit/apartment function filter
- mpzpDesignation (optional): MPZP zoning designation filter (exact match)
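The two polygon constraints called out in the description ([longitude, latitude] order and an identical first/last vertex) are easy to get wrong by hand. A sketch that closes the ring automatically; auto-closing is a client-side convenience assumption, not documented server behavior:

```python
# Illustrative: build a GeoJSON Polygon for search_by_polygon,
# closing the ring if needed and enforcing the 500-vertex limit.
def make_polygon(ring: list):
    if ring[0] != ring[-1]:
        ring = ring + [ring[0]]  # first and last point must be identical
    if len(ring) > 500:
        raise ValueError("max 500 vertices")
    return {"type": "Polygon", "coordinates": [ring]}

# [longitude, latitude] order, per the tool description
poly = make_polygon([[21.0, 52.2], [21.01, 52.2],
                     [21.01, 52.21], [21.0, 52.21]])
```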
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond readOnlyHint=true annotation, description adds crucial behavioral details: return value description ('Returns transactions found inside...'), coordinate ordering constraint ('[longitude, latitude]'), polygon closure requirement ('First and last point must be identical'), and a concrete JSON example.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Seven sentences covering purpose, input mechanism, output, usage context, technical constraints, and example. No redundancy; every sentence provides distinct value. Well front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex 15-parameter spatial tool with nested objects and no output schema, description adequately covers the critical complexity (GeoJSON format requirements, coordinate system) and describes return values. Minor gap: doesn't mention result limits or error behavior for invalid geometries.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage (baseline 3), description adds essential semantic context for the complex nested polygon parameter: coordinate order, closure constraint, and a complete working example that clarifies the expected data structure beyond the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb 'Search' and clear resource 'real estate transactions within a geographic polygon.' The GeoJSON specificity distinguishes it from sibling tools like search_by_area or search_transactions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear positive guidance: 'Use for precise area searches (neighborhoods, streets, custom regions).' However, lacks explicit contrast with siblings (e.g., when to use search_by_area vs this polygon search).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_parcels (A)
Read-only

Search for land parcels by parcel ID prefix (autocomplete). Returns matching parcels with their district, area, and GPS coordinates. Useful for finding exact parcel IDs, then searching transactions nearby. Example: search for parcels starting with '146518_8.01'.

Parameters

- q (required): Parcel ID prefix to search for (min 3 chars). E.g. '146518_8.01'
- limit (optional): Max results (1-10, default 10)
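The prefix length and result-limit rules above can be validated before the call. A minimal sketch; clamping limit into the schema's 1-10 range is an assumption about sensible client behavior, and the helper name is illustrative:

```python
# Hypothetical helper: validates the documented search_parcels
# constraints (min 3-char prefix, limit 1-10).
def parcel_query(prefix: str, limit: int = 10):
    if len(prefix) < 3:
        raise ValueError("prefix must be at least 3 characters")
    return {"q": prefix, "limit": max(1, min(limit, 10))}

q = parcel_query("146518_8.01")  # example prefix from the description
```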
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses return payload structure ('district, area, and GPS coordinates') which compensates for missing output schema. Clarifies autocomplete behavior pattern. Annotations declare readOnlyHint=true, and description confirms safety through 'Search' semantics without contradicting the read-only nature.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with logical progression: purpose → return values → usage context → concrete example. Zero redundancy. Front-loaded with action and resource, with every sentence delivering distinct value (mechanism, payload, workflow, syntax).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple 2-parameter schema and lack of output schema, description achieves completeness by detailing return fields (district, area, GPS) and explaining the autocomplete interaction pattern. Sufficient for agent to invoke confidently without surprises.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete descriptions for both 'q' (prefix) and 'limit'. Description reinforces parameter usage through concrete example ('146518_8.01') matching schema pattern, but does not add substantial semantic layer beyond well-documented schema fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb 'Search', identifies resource 'land parcels', and specifies unique mechanism 'parcel ID prefix (autocomplete)'. Clearly distinguishes from siblings like search_by_area and search_transactions through the prefix/autocomplete focus and explicit mention of workflow transitioning to transaction searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance ('Useful for finding exact parcel IDs, then searching transactions nearby') that clarifies the two-step process with sibling search_transactions. Lacks explicit 'when not to use' guidance comparing geographic search alternatives (search_by_area, search_by_polygon), though the prefix focus provides implicit differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_transactions (Grade: A)
Read-only

Search Polish real estate transactions from the national RCN registry (7M+ records). Returns transaction details: address, date, price, area, price/m², property type. Use list_locations first to find valid location names. Example: search for apartments in Mokotów sold in 2024 above 500,000 PLN.

Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| page | No | Page number for pagination (default: 1) | |
| sort | No | Sort by field (default: date) | date |
| limit | No | Number of results (1-50, default 10) | |
| order | No | Sort order (default: desc) | desc |
| dateTo | No | End date (YYYY-MM-DD) | |
| street | No | Street name filter (partial match, e.g. 'Puławska', 'Trakt Lubelski') | |
| maxArea | No | Maximum area in m² | |
| minArea | No | Minimum area in m² | |
| dateFrom | No | Start date (YYYY-MM-DD) | |
| location | No | Location name - city (e.g. 'Warszawa', 'Kraków', 'Gdańsk') or district (e.g. 'Mokotów', 'Kraków-Podgórze'). 'Warszawa', 'Kraków', 'Łódź' auto-expand to all sub-districts. Use list_locations to find valid names. | |
| maxPrice | No | Maximum price in PLN | |
| minPrice | No | Minimum price in PLN | |
| parcelId | No | Exact parcel ID as returned in search results (e.g. '146518_8.0108.27'). Must match exactly - copy from a previous search result's parcel_id field. | |
| marketType | No | Market type: primary (developer) or secondary (resale) | |
| buildingType | No | Building type filter (PKOB classification) | |
| propertyType | No | Property type filter | |
| unitFunction | No | Unit/apartment function filter | |
| buildingNumber | No | Building/house number (e.g. '251C', '12A'). Requires location or street to be set. | |
| mpzpDesignation | No | MPZP zoning designation filter (exact match, e.g. 'budownictwoMieszkanioweWielorodzinne', 'terenObiektowProdukcyjnychSkladowIMagazynow') | |
Behavior: 4/5

Annotations declare readOnlyHint=true. Description adds valuable context: data source (RCN registry), dataset scale (7M+ records), and specific return fields (address, date, price, area, price/m², property type). Could improve by mentioning pagination behavior or result format details.

Conciseness: 5/5

Four sentences, zero waste: (1) purpose and source, (2) return values, (3) prerequisite instruction, (4) example. Information is front-loaded and efficiently structured.

Completeness: 4/5

For a 16-parameter search tool with no output schema, the description adequately compensates by listing returned fields. Missing explicit details on pagination behavior or response structure, but the example and field list provide sufficient context for invocation.

Parameters: 3/5

Schema has 100% description coverage, establishing a baseline score of 3. Description adds minimal parameter semantics beyond the schema, though it reinforces the location parameter workflow ('Use list_locations first') and illustrates filter combinations through the example.

Purpose: 5/5

Description explicitly states the tool searches 'Polish real estate transactions from the national RCN registry' with specific scope (7M+ records). It clearly distinguishes from siblings by focusing on transaction search vs. comparison (compare_locations), statistics (get_price_statistics), or geographic searches (search_by_polygon, search_by_area).

Usage Guidelines: 4/5

Provides explicit prerequisite workflow: 'Use list_locations first to find valid location names.' Includes concrete usage example. Does not explicitly contrast with alternative search methods (search_by_area vs. this) or state when NOT to use it.
