minhamorada.pt
Server Details
Search Portuguese real estate — apartments and houses for sale or rent across all 18 districts of Portugal. Aggregates ~10,500 properties from Imovirtual, Idealista, and RE/MAX, updated weekly.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose: search_properties discovers listings, get_property retrieves specific details by ID, calculate_commute handles transportation analysis, and market_stats provides aggregate data. No functional overlap exists between the four tools.
Three tools follow verb_noun (calculate_commute, get_property, search_properties), but market_stats breaks the pattern by omitting a verb prefix like 'get_' or 'retrieve_'. Otherwise, all use lowercase with underscores consistently.
A four-tool surface is appropriate for this focused real estate discovery domain, covering search, retrieval, analytics, and commute calculation. While functional, it sits at the lower end of the ideal range; additional utilities like district listing or filter metadata could strengthen the surface.
Provides solid read-only coverage for property discovery workflows (search → details → commute analysis) plus market context. Missing write operations (favorites, contact agent) and helper utilities (list districts/typologies), but the core CRUD for browsing is well-covered.
Available Tools
4 tools

calculate_commute
Calculate estimated commute time between two points or between a property and a location. Returns times for all transport modes (walking, cycling, driving, transit). Based on straight-line distance with typical speeds.
| Name | Required | Description | Default |
|---|---|---|---|
| to_lat | Yes | Destination latitude | |
| to_lng | Yes | Destination longitude | |
| from_lat | No | Starting latitude (used if property_id not provided) | |
| from_lng | No | Starting longitude (used if property_id not provided) | |
| to_label | No | Human-readable name for the destination (e.g., 'Lisbon Airport') | |
| property_id | No | Property ID to calculate commute from (alternative to from_lat/from_lng) | |
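The description says estimates come from straight-line distance at typical speeds. A minimal sketch of that approach, assuming great-circle (haversine) distance; the per-mode speeds here are illustrative assumptions, not the server's actual constants:

```python
import math

# Assumed typical speeds in km/h; the server's real values are not published.
TYPICAL_KMH = {"walking": 4.5, "cycling": 15.0, "driving": 35.0, "transit": 25.0}

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle (straight-line) distance between two coordinates, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def estimate_commute_minutes(from_lat, from_lng, to_lat, to_lng):
    """Per-mode commute estimates: distance divided by a typical speed."""
    dist = haversine_km(from_lat, from_lng, to_lat, to_lng)
    return {mode: round(dist / kmh * 60) for mode, kmh in TYPICAL_KMH.items()}
```

Because the distance is straight-line, real door-to-door times (detours, traffic, transfers) will generally be longer, which is exactly the accuracy caveat the description discloses.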
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and succeeds by specifying that it 'Returns times for all transport modes' (output structure) and crucially disclosing the estimation methodology ('Based on straight-line distance with typical speeds'). This warns users about accuracy limitations, though it could further clarify traffic handling or failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste. The first establishes purpose, the second covers output behavior, and the third discloses estimation methodology. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description adequately covers what the tool returns (transport modes) and the calculation caveat. Given the 100% schema coverage and the conditional parameter logic (property_id vs. coordinates), the description provides sufficient context, though explicitly stating the mutual exclusivity of start-point parameters would strengthen it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by mapping the high-level concepts ('two points' vs. 'property') to the parameter groups (coordinates vs. property_id), helping users understand the mutual exclusivity of the starting point inputs, though it does not elaborate on formats or validation rules beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool calculates 'estimated commute time' using specific verbs and identifies the two main usage patterns (point-to-point vs. property-to-location). It implicitly distinguishes from siblings like get_property and search_properties by focusing on temporal/distance calculations rather than property metadata or search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description outlines the two input alternatives ('between two points or between a property') which guides parameter selection, but lacks explicit when-not-to-use guidance or comparisons to sibling tools that might also involve location data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_property
Get full details for a specific property by ID. Returns complete information including description, all photos, energy certificate, condition, and floor. Use after search_properties to show detailed info about a property.
| Name | Required | Description | Default |
|---|---|---|---|
| near_lat | No | Latitude to calculate commute times from | |
| near_lng | No | Longitude to calculate commute times from | |
| property_id | Yes | The property ID (e.g., 'imovirtual_19072237', 'idealista_34861988') | |
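A sketch of the MCP `tools/call` request a client would send for this tool. The JSON-RPC envelope follows the standard MCP shape; the near_lat/near_lng coordinates are optional, and their effect (adding commute times to the response) is implied by the schema rather than confirmed by the description:

```python
import json

def tool_call(name, arguments, call_id=1):
    """Build a generic MCP tools/call request (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

req = tool_call("get_property", {
    "property_id": "imovirtual_19072237",  # ID format from the schema example
    "near_lat": 38.7742,                   # optional: commute reference point
    "near_lng": -9.1342,
})
payload = json.dumps(req)  # wire format sent to the server
```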
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return payload contents ('description, all photos, energy certificate...') which is valuable behavioral context. However, lacks operational details: does not explain that optional lat/lng trigger commute calculations, omits error handling (e.g., invalid ID), caching, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences with zero redundancy: (1) core action, (2) return payload specifics, (3) usage workflow. Front-loaded structure puts the essential verb ('Get') and resource ('property') immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong coverage given constraints: no output schema exists, so description appropriately lists return fields. Acknowledges sibling relationship (search_properties → get_property). Minor gap: does not mention commute calculation behavior implied by optional coordinates.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description references 'by ID' reinforcing property_id purpose, but adds no semantic value for near_lat/near_lng beyond what schema already documents ('Latitude to calculate commute times from'). No penalties required since schema is comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Get full details for a specific property by ID' provides exact verb, resource, scope, and identifier. Naturally distinguishes from search_properties (search vs. retrieve) and calculate_commute (different function entirely).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit workflow guidance: 'Use after search_properties to show detailed info' establishes clear sequencing with sibling tool. Lacks explicit negative guidance (e.g., 'do not use for searching') but provides strong positive context for when to invoke.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_stats
Get aggregate market statistics for the Portuguese real estate market. Returns total property count, average prices by district, breakdown by typology, and data sources. Useful for questions like 'What's the average rent in Porto?'
| Name | Required | Description | Default |
|---|---|---|---|
| district | No | Filter stats to a specific district | |
| price_type | No | Filter to sale or rent only | |
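Both parameters are optional, so an empty arguments object should yield nationwide stats. A small helper sketching valid argument combinations; the 'sale'/'rent' values come from the schema description, while the validation itself is this sketch's addition:

```python
def market_stats_args(district=None, price_type=None):
    """Assemble arguments for market_stats; calling with neither filter is valid."""
    args = {}
    if district is not None:
        args["district"] = district             # e.g. "Porto"
    if price_type is not None:
        if price_type not in ("sale", "rent"):  # per the schema description
            raise ValueError("price_type must be 'sale' or 'rent'")
        args["price_type"] = price_type
    return args
```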
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return structure ('total property count, average prices by district...') which compensates for missing output schema, but omits operational details like data freshness, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each earning its place: action+scope, return values, and example usage. Front-loaded with the core verb, no redundancy, appropriate length for tool complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 optional parameters with full schema documentation, the description adequately covers the tool's function and return values. Minor gap: it does not explicitly note that both parameters are optional, which would help the agent know it can call this with no arguments for nationwide stats.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions ('Filter stats to a specific district', 'Filter to sale or rent only'), establishing baseline 3. The description adds an example question suggesting usage but doesn't add parameter syntax, formatting rules, or constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb-resource combination ('Get aggregate market statistics') with specific geographic scope ('Portuguese real estate market'). Implicitly distinguishes from siblings: calculate_commute (travel times), get_property (specific listing), and search_properties (individual properties) by focusing on aggregate data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a concrete example use case ('What's the average rent in Porto?') which hints at when to use it, but lacks explicit guidance contrasting it with siblings or stating when NOT to use it (e.g., for individual property details).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_properties
Search for properties (apartments, houses) for sale or rent in Portugal. Returns matching properties with prices, locations, and links. Use this to help users find real estate in specific Portuguese districts, with specific budgets, or near specific locations.
| Name | Required | Description | Default |
|---|---|---|---|
| dir | No | Sort direction (default: desc) | |
| page | No | Page number (default: 1) | |
| sort | No | Sort field (default: scraped_at = newest) | |
| limit | No | Results per page, max 20 (default: 10) | |
| area_min | No | Minimum area in square meters | |
| district | No | Portuguese district name (e.g., 'Lisboa', 'Porto', 'Braga', 'Faro') | |
| near_lat | No | Latitude for proximity search | |
| near_lng | No | Longitude for proximity search | |
| typology | No | Comma-separated apartment types: T0, T1, T2, T3, T4, T5+ | |
| condition | No | Comma-separated: new, renovated, used, to_renovate | |
| price_max | No | Maximum price in EUR. Rent: 400-2000. Sale: 80000-500000. | |
| price_min | No | Minimum price in EUR | |
| has_garden | No | Filter for properties with garden/outdoor space | |
| price_type | No | Whether to search for properties to buy ('sale') or rent ('rent') | |
| has_parking | No | Filter for properties with parking | |
| has_elevator | No | Filter for properties with elevator | |
| municipality | No | Municipality/concelho within a district | |
| near_transport | No | Transport mode for commute (default: transit) | |
| near_max_minutes | No | Maximum commute time in minutes (default: 30) |
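With 19 optional parameters, most calls populate only a few filters. A sketch of how a client might map a user request onto arguments; parameter names and constraints (limit capped at 20, comma-separated typology) come from the schema, while the helper itself is hypothetical:

```python
def search_args(district=None, price_type=None, price_min=None, price_max=None,
                typology=None, limit=10, page=1):
    """Assemble arguments for search_properties; every filter is optional."""
    args = {"limit": min(limit, 20), "page": page}  # schema caps limit at 20
    if district is not None:
        args["district"] = district
    if price_type is not None:
        args["price_type"] = price_type             # 'sale' or 'rent'
    if price_min is not None:
        args["price_min"] = price_min
    if price_max is not None:
        args["price_max"] = price_max
    if typology is not None:
        # typology is comma-separated in the schema, e.g. "T1,T2"
        args["typology"] = typology if isinstance(typology, str) else ",".join(typology)
    return args

# e.g. one- and two-bedroom rentals in Lisboa under 1200 EUR/month
args = search_args(district="Lisboa", price_type="rent", price_max=1200,
                   typology=["T1", "T2"])
```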
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return values ('prices, locations, and links') but omits pagination behavior, result limits, or safety classification (though 'search' implies read-only).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences: purpose statement, return value disclosure, and usage guidance. No redundancy, front-loaded with core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 19 parameters and no output schema, description compensates by describing return format. Could improve by noting all parameters are optional (required: 0) or result pagination behavior, but adequately covers the complex search domain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds valuable semantic bridging by mapping user concepts ('Portuguese districts', 'budgets', 'near specific locations') to parameter groups, helping agents understand which parameters to populate for user requests.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Search' with resource 'properties (apartments, houses)' and scope 'Portugal'. Implicitly distinguishes from sibling get_property (single fetch) by emphasizing search/filter functionality for finding real estate.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides usage context ('Use this to help users find real estate...') mapping to specific parameter categories (districts, budgets, locations), but lacks explicit when-not-to-use guidance or comparison to siblings like get_property or market_stats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
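One way to stage the claim file locally and confirm it parses before deploying it under your domain; the paths are the required well-known location, and the email is a placeholder you must replace:

```shell
# Stage the claim file and sanity-check that it is valid JSON before deploying
mkdir -p .well-known
cat > .well-known/glama.json <<'EOF'
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
EOF
python3 -m json.tool .well-known/glama.json > /dev/null && echo "glama.json is valid JSON"
```

The file must be served over HTTPS at the root of the same domain as your server's URL for Glama to detect it.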
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server successfully. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!