Glama

minhamorada.pt

Ownership verified

Server Details

Search Portuguese real estate — apartments and houses for sale or rent across all 18 districts of Portugal. Aggregates ~10,500 properties from Imovirtual, Idealista, and RE/MAX, updated weekly.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 4 of 4 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: search_properties discovers listings, get_property retrieves specific details by ID, calculate_commute handles transportation analysis, and market_stats provides aggregate data. No functional overlap exists between the four tools.

Naming Consistency: 4/5

Three tools follow verb_noun (calculate_commute, get_property, search_properties), but market_stats breaks the pattern by omitting a verb prefix like 'get_' or 'retrieve_'. Otherwise, all use lowercase with underscores consistently.

Tool Count: 4/5

Four tools is appropriate for this focused real estate discovery domain, covering search, retrieval, analytics, and commute calculation. While functional, it sits at the lower end of the ideal range; additional utilities like district listing or filter metadata could strengthen the surface.

Completeness: 4/5

Provides solid read-only coverage for property discovery workflows (search → details → commute analysis) plus market context. Missing write operations (favorites, contact agent) and helper utilities (list districts/typologies), but the core CRUD for browsing is well-covered.

Available Tools

4 tools
calculate_commute (Grade: A)

Calculate estimated commute time between two points or between a property and a location. Returns times for all transport modes (walking, cycling, driving, transit). Based on straight-line distance with typical speeds.

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| to_lat | Yes | Destination latitude |
| to_lng | Yes | Destination longitude |
| from_lat | No | Starting latitude (used if property_id not provided) |
| from_lng | No | Starting longitude (used if property_id not provided) |
| to_label | No | Human-readable name for the destination (e.g., 'Lisbon Airport') |
| property_id | No | Property ID to calculate commute from (alternative to from_lat/from_lng) |
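The description states the tool is "based on straight-line distance with typical speeds." A minimal sketch of that estimation approach is below; the per-mode speeds are illustrative guesses, not the server's published values.

```python
import math

# Assumed typical speeds in km/h per transport mode. These are
# illustrative; the server does not publish its actual constants.
TYPICAL_SPEED_KMH = {"walking": 5, "cycling": 15, "transit": 25, "driving": 40}

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle (straight-line) distance between two points, in km."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def estimate_commute_minutes(from_lat, from_lng, to_lat, to_lng):
    """Estimate commute minutes for every mode from straight-line distance."""
    dist = haversine_km(from_lat, from_lng, to_lat, to_lng)
    return {mode: round(dist / speed * 60, 1)
            for mode, speed in TYPICAL_SPEED_KMH.items()}

# Example: a point in central Lisbon to Lisbon Airport (roughly 6 km apart).
times = estimate_commute_minutes(38.7223, -9.1393, 38.7742, -9.1342)
```

Because this ignores road networks and traffic, driving and transit estimates will skew optimistic, which is consistent with the accuracy caveat the tool discloses.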
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden and succeeds by specifying that it 'Returns times for all transport modes' (output structure) and crucially disclosing the estimation methodology ('Based on straight-line distance with typical speeds'). This warns users about accuracy limitations, though it could further clarify traffic handling or failure modes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, zero waste. The first establishes purpose, the second covers output behavior, and the third discloses estimation methodology. Information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description adequately covers what the tool returns (transport modes) and the calculation caveat. Given the 100% schema coverage and the conditional parameter logic (property_id vs. coordinates), the description provides sufficient context, though explicitly stating the mutual exclusivity of start-point parameters would strengthen it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds value by mapping the high-level concepts ('two points' vs. 'property') to the parameter groups (coordinates vs. property_id), helping users understand the mutual exclusivity of the starting point inputs, though it does not elaborate on formats or validation rules beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates 'estimated commute time' using specific verbs and identifies the two main usage patterns (point-to-point vs. property-to-location). It implicitly distinguishes from siblings like get_property and search_properties by focusing on temporal/distance calculations rather than property metadata or search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description outlines the two input alternatives ('between two points or between a property') which guides parameter selection, but lacks explicit when-not-to-use guidance or comparisons to sibling tools that might also involve location data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_property (Grade: A)

Get full details for a specific property by ID. Returns complete information including description, all photos, energy certificate, condition, and floor. Use after search_properties to show detailed info about a property.

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| property_id | Yes | The property ID (e.g., 'imovirtual_19072237', 'idealista_34861988') |
| near_lat | No | Latitude to calculate commute times from |
| near_lng | No | Longitude to calculate commute times from |
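As a sketch of how an MCP client would invoke this tool, the snippet below builds a JSON-RPC 2.0 `tools/call` request (the message shape the MCP specification defines). The property ID is one of the schema's own example values; the coordinates are illustrative values for central Lisbon.

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 'tools/call' request as used by MCP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Example ID taken from the schema; near_lat/near_lng are optional and,
# per the parameter docs, add commute times from that point.
req = build_tool_call("get_property", {
    "property_id": "imovirtual_19072237",
    "near_lat": 38.7369,  # illustrative coordinate
    "near_lng": -9.1427,  # illustrative coordinate
})
payload = json.dumps(req)
```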
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses return payload contents ('description, all photos, energy certificate...') which is valuable behavioral context. However, lacks operational details: does not explain that optional lat/lng trigger commute calculations, omits error handling (e.g., invalid ID), caching, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences with zero redundancy: (1) core action, (2) return payload specifics, (3) usage workflow. Front-loaded structure puts the essential verb ('Get') and resource ('property') immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Strong coverage given constraints: no output schema exists, so description appropriately lists return fields. Acknowledges sibling relationship (search_properties → get_property). Minor gap: does not mention commute calculation behavior implied by optional coordinates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description references 'by ID' reinforcing property_id purpose, but adds no semantic value for near_lat/near_lng beyond what schema already documents ('Latitude to calculate commute times from'). No penalties required since schema is comprehensive.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Get full details for a specific property by ID' provides exact verb, resource, scope, and identifier. Naturally distinguishes from search_properties (search vs. retrieve) and calculate_commute (different function entirely).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit workflow guidance: 'Use after search_properties to show detailed info' establishes clear sequencing with sibling tool. Lacks explicit negative guidance (e.g., 'do not use for searching') but provides strong positive context for when to invoke.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

market_stats (Grade: A)

Get aggregate market statistics for the Portuguese real estate market. Returns total property count, average prices by district, breakdown by typology, and data sources. Useful for questions like 'What's the average rent in Porto?'

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| district | No | Filter stats to a specific district |
| price_type | No | Filter to sale or rent only |
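The kind of aggregation this tool describes (average prices by district, optionally filtered by price type) can be sketched as below. The field names and sample data are assumptions for illustration, not the server's actual schema or figures.

```python
from collections import defaultdict

# Illustrative listing records; field names are assumed, not the
# server's actual response schema.
listings = [
    {"district": "Porto", "price_type": "rent", "price": 900},
    {"district": "Porto", "price_type": "rent", "price": 1100},
    {"district": "Lisboa", "price_type": "rent", "price": 1500},
]

def average_price_by_district(listings, price_type=None):
    """Average listing price per district, optionally filtered by price_type."""
    totals = defaultdict(lambda: [0, 0])  # district -> [price sum, count]
    for item in listings:
        if price_type and item["price_type"] != price_type:
            continue
        acc = totals[item["district"]]
        acc[0] += item["price"]
        acc[1] += 1
    return {district: total / count for district, (total, count) in totals.items()}

stats = average_price_by_district(listings, price_type="rent")
```

Calling the helper with no `price_type` mirrors the tool's behavior of accepting no arguments at all and returning nationwide aggregates.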
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses return structure ('total property count, average prices by district...') which compensates for missing output schema, but omits operational details like data freshness, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each earning its place: action+scope, return values, and example usage. Front-loaded with the core verb, no redundancy, appropriate length for tool complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 optional parameters with full schema documentation, the description adequately covers the tool's function and return values. Minor gap: it does not explicitly note that both parameters are optional, which would tell the agent it can call the tool with no arguments to get nationwide stats.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions ('Filter stats to a specific district', 'Filter to sale or rent only'), establishing baseline 3. The description adds an example question suggesting usage but doesn't add parameter syntax, formatting rules, or constraints beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb-resource combination ('Get aggregate market statistics') with specific geographic scope ('Portuguese real estate market'). Implicitly distinguishes from siblings: calculate_commute (travel times), get_property (specific listing), and search_properties (individual properties) by focusing on aggregate data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides a concrete example use case ('What's the average rent in Porto?') which hints at when to use it, but lacks explicit guidance contrasting it with siblings or stating when NOT to use it (e.g., for individual property details).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_properties (Grade: A)

Search for properties (apartments, houses) for sale or rent in Portugal. Returns matching properties with prices, locations, and links. Use this to help users find real estate in specific Portuguese districts, with specific budgets, or near specific locations.

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| district | No | Portuguese district name (e.g., 'Lisboa', 'Porto', 'Braga', 'Faro') |
| municipality | No | Municipality/concelho within a district |
| price_type | No | Whether to search for properties to buy ('sale') or rent ('rent') |
| price_min | No | Minimum price in EUR |
| price_max | No | Maximum price in EUR. Rent: 400-2000. Sale: 80000-500000. |
| typology | No | Comma-separated apartment types: T0, T1, T2, T3, T4, T5+ |
| condition | No | Comma-separated: new, renovated, used, to_renovate |
| area_min | No | Minimum area in square meters |
| has_garden | No | Filter for properties with garden/outdoor space |
| has_parking | No | Filter for properties with parking |
| has_elevator | No | Filter for properties with elevator |
| near_lat | No | Latitude for proximity search |
| near_lng | No | Longitude for proximity search |
| near_transport | No | Transport mode for commute (default: transit) |
| near_max_minutes | No | Maximum commute time in minutes (default: 30) |
| sort | No | Sort field (default: scraped_at = newest) |
| dir | No | Sort direction (default: desc) |
| page | No | Page number (default: 1) |
| limit | No | Results per page, max 20 (default: 10) |
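A hypothetical helper for assembling `search_properties` arguments is sketched below. It encodes two constraints the parameter docs state: `limit` is capped at 20, and `typology` is a comma-separated string. The helper itself is an illustration, not part of the server's API.

```python
# Hypothetical client-side helper; parameter names mirror the schema above.
def build_search_args(district=None, price_type=None, typologies=None,
                      price_min=None, price_max=None, limit=10, page=1):
    """Assemble a search_properties arguments dict, enforcing limit <= 20."""
    args = {"limit": min(limit, 20), "page": page}
    if district:
        args["district"] = district
    if price_type:
        args["price_type"] = price_type  # 'sale' or 'rent'
    if typologies:
        args["typology"] = ",".join(typologies)  # e.g. 'T1,T2'
    if price_min is not None:
        args["price_min"] = price_min
    if price_max is not None:
        args["price_max"] = price_max
    return args

# Example: T1/T2 rentals in Lisboa under 1500 EUR; the over-limit 50 is clamped.
args = build_search_args(district="Lisboa", price_type="rent",
                         typologies=["T1", "T2"], price_max=1500, limit=50)
```

Since every parameter is optional, an agent can also call the tool with an empty arguments object and page through the newest listings nationwide.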
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses return values ('prices, locations, and links') but omits pagination behavior, result limits, or safety classification (though 'search' implies read-only).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: purpose statement, return value disclosure, and usage guidance. No redundancy, front-loaded with core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 19 parameters and no output schema, description compensates by describing return format. Could improve by noting all parameters are optional (required: 0) or result pagination behavior, but adequately covers the complex search domain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but description adds valuable semantic bridging by mapping user concepts ('Portuguese districts', 'budgets', 'near specific locations') to parameter groups, helping agents understand which parameters to populate for user requests.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'Search' with resource 'properties (apartments, houses)' and scope 'Portugal'. Implicitly distinguishes from sibling get_property (single fetch) by emphasizing search/filter functionality for finding real estate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides usage context ('Use this to help users find real estate...') mapping to specific parameter categories (districts, budgets, locations), but lacks explicit when-not-to-use guidance or comparison to siblings like get_property or market_stats.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
