
BrokerIA Imoveis

Server Details

Search Brazilian real estate, simulate financing, qualify leads, schedule visits.

Status: Healthy
Transport: Streamable HTTP
Repository: raklapimenta/brokeria-mcp-server
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 5 of 5 tools scored. Lowest: 3.3/5.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: searching properties, viewing details, comparing properties, finding nearby properties, and scheduling visits. The descriptions explicitly differentiate their functions, such as 'buscar_imoveis' for general searches versus 'imoveis_proximos' for location-based searches, ensuring agents can easily select the correct tool.

Naming Consistency: 5/5

All tool names follow a consistent 'brokeria_verb_noun' pattern in snake_case, such as 'brokeria_buscar_imoveis' and 'brokeria_detalhes_imovel'. This uniformity makes the tool set predictable and easy to understand, with no deviations in naming conventions.

Tool Count: 5/5

With 5 tools, the server is well-scoped for a real estate domain, covering key workflows like search, details, comparison, proximity, and scheduling. Each tool serves a unique and necessary function without being excessive or insufficient for the apparent scope.

Completeness: 4/5

The tool set provides comprehensive coverage for core real estate operations, including search, details, comparison, location-based discovery, and visit scheduling. A minor gap exists in lacking tools for direct property management (e.g., create/update/delete listings), but this is reasonable given the server's focus on public data and user interactions.

Available Tools

5 tools
brokeria_agendar_visita: Request a property visit (Grade: A)

Registers a visit REQUEST for a property. Requires the property id, visitor name, phone or email, preferred date, and period (morning/afternoon/evening). The request is sent to the responsible real estate agency, which follows up to confirm. It does not confirm the visit or reserve a time slot on its own.

Parameters (JSON Schema)
property_id (required): UUID of the property (returned by search)
visitor_name (required): Full name of the visitor
contact (required): At least one contact method is mandatory (phone or email)
preferred_date (required): Preferred date, YYYY-MM-DD
preferred_period (required): Period of the day
notes (optional): Optional remarks
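Based on the parameter table above, a call to brokeria_agendar_visita might carry arguments like the sketch below. The concrete values are made-up placeholders, the object shape of `contact` is an assumption (the schema only says phone or email is required), and the allowed period values come from the tool description:

```python
import re

# Hypothetical arguments for brokeria_agendar_visita. Field names come
# from the parameter table; the UUID, name, phone, and date are made-up
# placeholders, and the object shape of "contact" is an assumption.
args = {
    "property_id": "550e8400-e29b-41d4-a716-446655440000",  # id returned by a search
    "visitor_name": "Maria Silva",
    "contact": {"phone": "+55 19 99999-0000"},  # at least one of phone/email
    "preferred_date": "2025-07-01",             # YYYY-MM-DD per the description
    "preferred_period": "manha",                # manha / tarde / noite
    "notes": "Prefere visitar com corretor",    # optional remarks
}

# Light client-side checks mirroring the documented constraints.
assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", args["preferred_date"])
assert args["preferred_period"] in {"manha", "tarde", "noite"}
assert "phone" in args["contact"] or "email" in args["contact"]
```

Validating the date format and period client-side is worthwhile here because this is the only mutating tool on the server, and the description asks for explicit user confirmation before calling it.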
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-readOnly, non-idempotent, non-destructive mutation with openWorldHint. The description adds valuable behavioral context beyond annotations: it specifies that this creates a real appointment ('Cria agendamento real', "creates a real appointment"), requires user confirmation before invocation, and involves data sharing with the real estate agency under LGPD. This clarifies the mutation's impact and procedural requirements, though it doesn't detail rate limits or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core action, followed by critical procedural and compliance instructions. Every sentence adds essential value: the first states the purpose, the second gives mandatory confirmation steps, and the third covers data sharing. No wasted words, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation with 7 parameters, low schema coverage, no output schema), the description does well by covering purpose, usage guidelines, and key behavioral aspects like confirmation and LGPD compliance. It lacks details on return values or error handling, but the annotations provide some safety context (non-destructive, openWorld), making it reasonably complete for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (29%), with only 'data_hora' and 'website' having descriptions. The description compensates by emphasizing that 'imovel, nome, telefone e data/hora' must be confirmed with the user, highlighting the importance and semantics of these four required parameters. However, it doesn't explain optional parameters like 'email' or 'observacoes', leaving some gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Agenda visita a um imovel', "schedules a visit to a property") and resource ('imovel', "property"), distinguishing it from sibling tools like 'brokeria_buscar_imoveis' (search) or 'brokeria_detalhes_imovel' (view details). It specifies this creates a real appointment, making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage instructions: 'SEMPRE confirme imovel, nome, telefone e data/hora com o usuario antes de chamar' ("ALWAYS confirm property, name, phone, and date/time with the user before calling"). This gives clear when-to-use guidance (after confirmation) and implies when-not-to-use (if unconfirmed). It also mentions data sharing per LGPD, adding context for privacy considerations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brokeria_buscar_imoveis: Search Brazilian real estate listings (Grade: A, Read-only)

Searches residential properties in the catalog of BrokerIA's partner real estate agencies in Brazil. Filters by city, neighborhood, type (house, apartment, land), price range, bedrooms, parking spaces, and area. Returns a list with photo, approximate address, listed price, and an id for looking up details.

Parameters (JSON Schema)
descricao (required): Natural-language description of the desired property
cidade (optional): City. E.g. "Campinas"
bairro (optional): Neighborhood
tipo (optional)
finalidade (optional)
valor_min (optional): Minimum price in BRL
valor_max (optional): Maximum price in BRL
quartos_min (optional): Minimum number of bedrooms
quartos_max (optional): Maximum number of bedrooms
vagas_min (optional): Minimum number of parking spaces
area_min (optional): Minimum area in m²
imobiliaria (optional): Real estate agency slug. E.g. "kasamais"
limit (optional)
offset (optional)
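A search call built from the table above might look like the following sketch. The filter values are illustrative, and the valid values for tipo (casa, apartamento, terreno) come from the tool description, not the schema:

```python
# Hypothetical arguments for brokeria_buscar_imoveis. Only "descricao" is
# required; the other keys are optional structured filters from the table.
search_args = {
    "descricao": "apartamento de 2 quartos perto do centro",
    "cidade": "Campinas",
    "tipo": "apartamento",   # casa / apartamento / terreno per the description
    "quartos_min": 2,
    "valor_max": 500_000,    # maximum price in BRL
    "limit": 10,
}

# Sanity checks an agent might run before calling the tool.
assert "descricao" in search_args  # the only required field
if {"valor_min", "valor_max"} <= search_args.keys():
    assert search_args["valor_min"] <= search_args["valor_max"]
```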
Behavior: 4/5

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable context about the data source (partner agencies) and the critical warning about only presenting returned results, which helps the agent understand behavioral constraints beyond the annotations.

Conciseness: 5/5

Two sentences that are completely front-loaded with essential information. The first sentence covers purpose and capabilities, the second provides a critical behavioral warning. Zero wasted words.

Completeness: 4/5

For a search tool with comprehensive annotations and good schema coverage, the description provides appropriate context about data sources and critical usage warnings. The main gap is no output schema, but the description doesn't need to explain return values given the tool's straightforward search purpose.

Parameters: 3/5

With 73% schema description coverage, the schema already documents most parameters well. The description mentions natural language descriptions and structured filters, which aligns with the 'descricao' parameter and other filter parameters, but doesn't add significant semantic value beyond what's in the schema.

Purpose: 5/5

The description clearly states the tool searches for real estate properties available for sale or rent from partner agencies, distinguishing it from siblings like brokeria_detalhes_imovel (details) or brokeria_agendar_visita (schedule visit). It specifies both natural language and structured filtering approaches.

Usage Guidelines: 4/5

The description provides clear context about when to use this tool (searching real properties from partners) and includes an important exclusion warning about not inventing listings. However, it doesn't explicitly mention when to use alternatives like brokeria_match_imoveis or brokeria_imoveis_proximos.

brokeria_comparar_imoveis: Compare properties side by side (Grade: A, Read-only)

Compares 2 to 4 properties side by side, from ids returned by searches. Builds a table with listed price, area, bedrooms, parking spaces, neighborhood, responsible agency, and price per square meter. Useful for visualizing objective trade-offs. It does not indicate which one is the "best"; the decision belongs to the user.

Parameters (JSON Schema)
imovel_ids (required): List of 2 to 4 property UUIDs returned by search
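Given ids from a previous search, the comparison call reduces to a single list parameter. A minimal sketch, with placeholder UUIDs:

```python
import uuid

# Hypothetical arguments for brokeria_comparar_imoveis: 2 to 4 property
# UUIDs returned by a previous search. The ids below are placeholders.
compare_args = {
    "imovel_ids": [
        "550e8400-e29b-41d4-a716-446655440000",
        "6fa459ea-ee8a-3ca4-894e-db77e160355e",
    ]
}

assert 2 <= len(compare_args["imovel_ids"]) <= 4  # documented range
for pid in compare_args["imovel_ids"]:
    uuid.UUID(pid)  # raises ValueError on a malformed id
```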
Behavior: 4/5

Annotations already provide important behavioral hints (read-only, non-destructive, idempotent, closed-world). The description adds valuable context beyond this: it specifies the comparison scope (2-3 properties) and lists the specific attributes compared (price, bedrooms, area, neighborhood, condominium, status). This gives the agent concrete information about what the comparison includes, which the annotations don't cover.

Conciseness: 5/5

The description is perfectly concise and well-structured: two sentences that efficiently convey purpose and usage guidelines. Every word earns its place with no redundancy or wasted text. The information is front-loaded with the core functionality stated immediately.

Completeness: 4/5

Given the tool's moderate complexity (comparison operation), rich annotations covering safety and behavior, and 100% schema coverage, the description provides good contextual completeness. It explains what gets compared and when to use it. The main gap is no output schema, so the agent doesn't know the comparison format, but the description compensates somewhat by listing compared attributes.

Parameters: 3/5

Schema description coverage is 100%, with the parameter well-documented as 'Lista de 2 ou 3 UUIDs de imoveis para comparar' ("list of 2 or 3 property UUIDs to compare"). The description mentions comparing 2 or 3 properties, which aligns with the schema, but doesn't add meaningful semantic information beyond what's already in the structured schema. Baseline 3 is appropriate when the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the tool's purpose: comparing 2-3 properties side-by-side across specific attributes (price, bedrooms, area, neighborhood, condominium, construction status). It uses a specific verb ('Compara') and identifies the resource ('imoveis'), but doesn't explicitly differentiate from sibling tools like 'brokeria_buscar_imoveis' or 'brokeria_detalhes_imovel' beyond the comparison focus.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool: 'Use apos buscar imoveis para ajudar o usuario a escolher' ("use after searching for properties to help the user choose"). This gives a logical sequence (after property search) and purpose (aid decision-making), but doesn't explicitly state when NOT to use it or name alternatives among siblings.

brokeria_detalhes_imovel: Get details of a specific property (Grade: B, Read-only)

Returns the public details of a specific property, by the id returned from a search. Includes photos, description, characteristics (bedrooms, suites, parking spaces, area), approximate address, listed price, and the responsible agency. It does not calculate financing or assess eligibility for credit programs.

Parameters (JSON Schema)
imovel_id (required): UUID of the property
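The details lookup takes a single id. A minimal sketch, where the UUID is a placeholder standing in for an id returned by brokeria_buscar_imoveis:

```python
import uuid

# Hypothetical arguments for brokeria_detalhes_imovel; the UUID is a
# placeholder for an id returned by brokeria_buscar_imoveis.
detail_args = {"imovel_id": "550e8400-e29b-41d4-a716-446655440000"}

uuid.UUID(detail_args["imovel_id"])  # the single parameter must be a valid UUID
```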
Behavior: 3/5

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds minimal behavioral context by specifying what information is returned (e.g., photos, values), but doesn't disclose rate limits, authentication needs, or error handling. With annotations providing core behavioral traits, the description adds some value but not rich details, meeting the lower bar for this dimension.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the core purpose ('Retorna informacoes completas de um imovel', "returns complete information about a property") and lists key details without unnecessary words. Every part of the sentence adds value by specifying the types of information returned, making it appropriately sized and well-structured for quick understanding.

Completeness: 3/5

Given the tool's low complexity (1 parameter, no output schema) and rich annotations (covering read-only, non-destructive, idempotent behavior), the description is somewhat complete but has gaps. It explains what information is returned, which helps contextualize the tool, but lacks usage guidelines and doesn't detail output format or error cases. Without an output schema, more description of return values would be beneficial, but annotations mitigate some needs.

Parameters: 3/5

The input schema has 100% description coverage, with the single parameter 'imovel_id' documented as 'UUID do imovel' ("UUID of the property"). The description doesn't add any parameter-specific semantics beyond what the schema provides (e.g., format examples or validation rules). With high schema coverage, the baseline score is 3, as the description doesn't compensate but also doesn't need to given the schema's completeness.

Purpose: 4/5

The description clearly states the tool's purpose: 'Retorna informacoes completas de um imovel' (returns complete information about a property). It specifies the verb ('retorna') and resource ('imovel'), and lists the types of information returned (photos, description, characteristics, values, location, and real estate agency). However, it doesn't explicitly differentiate from sibling tools like 'brokeria_buscar_imoveis' (search properties), which might return similar data but for multiple properties.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a property ID), exclusions, or comparisons to siblings such as 'brokeria_buscar_imoveis' for broader searches or 'brokeria_imoveis_proximos' for location-based queries. Usage is implied by the context of having a specific property ID, but no explicit instructions are given.

brokeria_imoveis_proximos: Find properties near a location (Grade: A, Read-only)

Lists properties near a reference address, postal code (CEP), or geographic coordinate. Returns up to 20 properties ordered by straight-line distance, with photo, listed price, basic characteristics, and distance in km. Useful for users who prioritize living near a specific place (work, school).

Parameters (JSON Schema)
referencia (optional): Address, CEP, neighborhood name, or landmark (e.g. "Av Paulista 1578", "13084-060")
lat (optional): Latitude (optional, alternative to referencia)
lng (optional): Longitude (optional, alternative to referencia)
raio_km (optional): Search radius in km (default 5)
limit (optional)
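A proximity call might look like the sketch below. Note that the either/or requirement between referencia and a lat/lng pair is inferred from the parameter descriptions, not enforced by the schema, and the concrete values are placeholders:

```python
# Hypothetical arguments for brokeria_imoveis_proximos. Every parameter is
# optional in the schema, but the parameter descriptions imply a call needs
# either "referencia" or a lat/lng pair; raio_km defaults to 5.
near_args = {
    "referencia": "Av Paulista 1578",  # address, CEP, neighborhood, or landmark
    "raio_km": 2,                      # search radius in km (default 5)
    "limit": 10,
}

has_reference = "referencia" in near_args
has_coords = {"lat", "lng"} <= near_args.keys()
assert has_reference or has_coords  # inferred requirement, not schema-enforced
```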
Behavior: 4/5

Annotations already indicate this is a read-only, non-destructive, idempotent, and open-world operation. The description adds useful context beyond annotations by specifying the default radius and implying location-based search behavior, but it does not detail aspects like rate limits, authentication needs, or response format, keeping it from a score of 5.

Conciseness: 5/5

The description is appropriately sized and front-loaded, with two sentences that efficiently convey the tool's purpose and usage without unnecessary details. Every sentence adds value, making it concise and well-structured.

Completeness: 4/5

Given the tool's complexity (location-based search), annotations cover safety and behavior well, and schema coverage is high. However, there is no output schema, and the description does not explain return values or error handling, which slightly limits completeness. It is mostly adequate but has minor gaps.

Parameters: 3/5

Schema description coverage is 75%, with parameters like 'latitude' and 'longitude' well-documented in the schema. The description adds minimal value beyond the schema by mentioning the default radius and search examples, but it does not explain parameters like 'limit' or provide additional semantics, so it meets the baseline of 3 for high schema coverage.

Purpose: 5/5

The description clearly states the tool's purpose: 'Busca imoveis proximos a uma localizacao (latitude/longitude) ou nome de lugar' ("searches for properties near a location (latitude/longitude) or place name"). It specifies the verb ('Busca'), resource ('imoveis'), and scope ('proximos a uma localizacao'), distinguishing it from siblings like 'brokeria_buscar_imoveis' (general search) and 'brokeria_detalhes_imovel' (details).

Usage Guidelines: 4/5

The description provides clear context for usage with examples ('Ex: "perto da UNICAMP", "proximo ao Shopping Iguatemi"', i.e. "near UNICAMP", "near the Iguatemi mall") and mentions the default radius ('Raio padrao: 5km', "default radius: 5 km"), but it does not explicitly state when to use this tool versus alternatives like 'brokeria_buscar_imoveis' or other siblings, which would be needed for a score of 5.
