
BrokerIA Imoveis

Server Details

Search Brazilian real estate, simulate financing, qualify leads, schedule visits.

Status: Healthy
Transport: Streamable HTTP
Repository: raklapimenta/brokeria-mcp-server
GitHub Stars: 0

Tool Descriptions · Grade: A

Average 4.2/5 across 9 of 9 tools scored. Lowest: 3.3/5.

Server Coherence · Grade: A
Disambiguation: 4/5

Most tools have distinct purposes, such as searching properties, scheduling visits, or simulating financing. However, there is some overlap between brokeria_buscar_imoveis and brokeria_match_imoveis, as both return property listings, which could cause confusion about when to use each. The descriptions help clarify the intended workflows, but the distinction is not immediately obvious without reading the detailed instructions.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with the prefix 'brokeria_' followed by a descriptive verb_noun combination, such as 'agendar_visita' or 'simular_financiamento'. This uniformity makes the tool set predictable and easy to navigate, enhancing usability for agents.

Tool Count: 5/5

With 9 tools, the server is well-scoped for its real estate brokerage domain. Each tool serves a specific function in the property search, financing, and lead management workflows, and there are no redundant or trivial tools. The count aligns with typical MCP servers and supports comprehensive agent interactions.

Completeness: 4/5

The tool set covers key aspects of real estate brokerage, including property search, comparison, financing simulation, lead management, and visit scheduling. A minor gap is the lack of tools for updating or canceling visits or leads, which might require workarounds. Overall, the surface supports core workflows effectively with only slight omissions.

Available Tools

9 tools
brokeria_agendar_visita · Agendar Visita ao Imóvel · Grade: A

Schedules a visit to a property. Creates a real appointment: ALWAYS confirm the property, name, phone number, and date/time with the user before calling. Inform the user that the data will be shared with the real estate agency in accordance with the LGPD.

Parameters (JSON Schema)
Name         Required  Description
nome         yes
email        no
website      no        Do not fill this field
telefone     yes
data_hora    yes       ISO 8601
imovel_id    yes
observacoes  no
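A minimal sketch of how a client might assemble arguments for this tool after confirming them with the user. The helper function and the sample values are hypothetical, not part of the server's API; only `data_hora` is format-checked (ISO 8601, per the schema), and the honeypot `website` field is deliberately never filled.

```python
from datetime import datetime

def build_agendar_visita_args(nome, telefone, data_hora, imovel_id,
                              email=None, observacoes=None):
    """Assemble the required fields for brokeria_agendar_visita;
    optional fields left as None are omitted, and the 'website'
    honeypot field is intentionally never included."""
    # The schema marks data_hora as ISO 8601; fail fast if malformed.
    datetime.fromisoformat(data_hora)  # raises ValueError on bad input
    args = {
        "nome": nome,
        "telefone": telefone,
        "data_hora": data_hora,
        "imovel_id": imovel_id,
    }
    if email is not None:
        args["email"] = email
    if observacoes is not None:
        args["observacoes"] = observacoes
    return args

# Illustrative values only; not real data.
payload = build_agendar_visita_args(
    "Maria Silva", "+55 19 99999-0000",
    "2025-03-10T14:00:00", "c2b1e7f0-example-uuid")
```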
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-readOnly, non-idempotent, non-destructive mutation with openWorldHint. The description adds valuable behavioral context beyond annotations: it specifies that this creates a real appointment ('Cria agendamento real'), requires user confirmation before invocation, and involves data sharing with the real estate agency under LGPD. This clarifies the mutation's impact and procedural requirements, though it doesn't detail rate limits or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core action, followed by critical procedural and compliance instructions. Every sentence adds essential value: the first states the purpose, the second gives mandatory confirmation steps, and the third covers data sharing. No wasted words, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation with 7 parameters, low schema coverage, no output schema), the description does well by covering purpose, usage guidelines, and key behavioral aspects like confirmation and LGPD compliance. It lacks details on return values or error handling, but the annotations provide some safety context (non-destructive, openWorld), making it reasonably complete for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (29%), with only 'data_hora' and 'website' having descriptions. The description compensates by emphasizing that 'imovel, nome, telefone e data/hora' must be confirmed with the user, highlighting the importance and semantics of these four required parameters. However, it doesn't explain optional parameters like 'email' or 'observacoes', leaving some gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Agenda visita a um imovel', i.e. schedules a visit to a property) and resource ('imovel', a property), distinguishing it from sibling tools like 'brokeria_buscar_imoveis' (search) or 'brokeria_detalhes_imovel' (view details). It specifies this creates a real appointment, making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage instructions: 'SEMPRE confirme imovel, nome, telefone e data/hora com o usuario antes de chamar' (ALWAYS confirm the property, name, phone number, and date/time with the user before calling). This gives clear when-to-use guidance (after confirmation) and implies when-not-to-use (if unconfirmed). It also mentions data sharing per LGPD, adding context for privacy considerations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brokeria_buscar_imoveis · Buscar Imóveis no BrokerIA · Grade: A
Read-only · Idempotent

Searches REAL properties available for sale or rent from BrokerIA's partner real estate agencies. Accepts natural-language descriptions or structured filters. IMPORTANT: only present properties returned by this tool; NEVER invent listings.

Parameters (JSON Schema)
Name         Required  Description
mcmv         no        Filter to MCMV properties only (Minha Casa Minha Vida, up to R$350k)
tipo         no
limit        no
bairro       no        Neighborhood
cidade       no        City. E.g. "Campinas"
offset       no
area_min     no        Minimum area in m²
descricao    yes       Natural-language description of the desired property
vagas_min    no        Minimum number of parking spaces
valor_max    no        Maximum price in reais
valor_min    no        Minimum price in reais
finalidade   no
imobiliaria  no        Real estate agency slug. E.g. "kasamais"
quartos_max  no        Maximum number of bedrooms
quartos_min  no        Minimum number of bedrooms
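As a sketch of how an agent might call this tool, the helper below combines the required natural-language `descricao` with optional structured filters, dropping unset ones so the call carries only meaningful fields. The helper name and sample values are illustrative, not part of the server's API.

```python
def build_busca_args(descricao, **filtros):
    """Combine the required 'descricao' with any structured filters
    from the schema above, omitting filters left as None."""
    args = {"descricao": descricao}
    args.update({k: v for k, v in filtros.items() if v is not None})
    return args

# Illustrative call: natural language plus structured constraints.
payload = build_busca_args(
    "apartamento de 2 quartos perto do centro",
    cidade="Campinas", quartos_min=2, valor_max=350_000,
    mcmv=True, bairro=None)  # bairro unset, so it is dropped
```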
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable context about the data source (partner agencies) and the critical warning about only presenting returned results, which helps the agent understand behavioral constraints beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences that are completely front-loaded with essential information. The first sentence covers purpose and capabilities, the second provides a critical behavioral warning. Zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with comprehensive annotations and good schema coverage, the description provides appropriate context about data sources and critical usage warnings. The main gap is no output schema, but the description doesn't need to explain return values given the tool's straightforward search purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 73% schema description coverage, the schema already documents most parameters well. The description mentions natural language descriptions and structured filters, which aligns with the 'descricao' parameter and other filter parameters, but doesn't add significant semantic value beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for real estate properties available for sale or rent from partner agencies, distinguishing it from siblings like brokeria_detalhes_imovel (details) or brokeria_agendar_visita (schedule visit). It specifies both natural language and structured filtering approaches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (searching real properties from partners) and includes an important exclusion warning about not inventing listings. However, it doesn't explicitly mention when to use alternatives like brokeria_match_imoveis or brokeria_imoveis_proximos.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brokeria_comparar_imoveis · Comparar Imóveis · Grade: A
Read-only · Idempotent

Compares 2 or 3 properties side by side: price, bedrooms, area, neighborhood, condominium fees, and construction status. Use after searching for properties to help the user choose.

Parameters (JSON Schema)
Name        Required  Description
imovel_ids  yes       List of 2 or 3 property UUIDs to compare
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide important behavioral hints (read-only, non-destructive, idempotent, closed-world). The description adds valuable context beyond this: it specifies the comparison scope (2-3 properties) and lists the specific attributes compared (price, bedrooms, area, neighborhood, condominium, status). This gives the agent concrete information about what the comparison includes, which the annotations don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and well-structured: two sentences that efficiently convey purpose and usage guidelines. Every word earns its place with no redundancy or wasted text. The information is front-loaded with the core functionality stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (comparison operation), rich annotations covering safety and behavior, and 100% schema coverage, the description provides good contextual completeness. It explains what gets compared and when to use it. The main gap is no output schema, so the agent doesn't know the comparison format, but the description compensates somewhat by listing compared attributes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter well documented as 'Lista de 2 ou 3 UUIDs de imoveis para comparar' (a list of 2 or 3 property UUIDs to compare). The description mentions comparing 2 or 3 properties, which aligns with the schema, but doesn't add meaningful semantic information beyond what's already in the structured schema. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: comparing 2-3 properties side-by-side across specific attributes (price, bedrooms, area, neighborhood, condominium, construction status). It uses a specific verb ('Compara') and identifies the resource ('imoveis'), but doesn't explicitly differentiate from sibling tools like 'brokeria_buscar_imoveis' or 'brokeria_detalhes_imovel' beyond the comparison focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'Use apos buscar imoveis para ajudar o usuario a escolher' (Use after searching for properties to help the user choose). This gives a logical sequence (after property search) and purpose (aid decision-making), but doesn't explicitly state when NOT to use it or name alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brokeria_detalhes_imovel · Detalhes do Imóvel · Grade: B
Read-only · Idempotent

Returns complete information about a property: photos, description, features, prices, location, and real estate agency.

Parameters (JSON Schema)
Name       Required  Description
imovel_id  yes       Property UUID
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds minimal behavioral context by specifying what information is returned (e.g., photos, values), but doesn't disclose rate limits, authentication needs, or error handling. With annotations providing core behavioral traits, the description adds some value but not rich details, meeting the lower bar for this dimension.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Retorna informacoes completas de um imovel', returns complete information about a property) and lists key details without unnecessary words. Every part of the sentence adds value by specifying the types of information returned, making it appropriately sized and well structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema) and rich annotations (covering read-only, non-destructive, idempotent behavior), the description is somewhat complete but has gaps. It explains what information is returned, which helps contextualize the tool, but lacks usage guidelines and doesn't detail output format or error cases. Without an output schema, more description of return values would be beneficial, but annotations mitigate some needs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'imovel_id' documented as 'UUID do imovel' (the property's UUID). The description doesn't add any parameter-specific semantics beyond what the schema provides (e.g., format examples or validation rules). With high schema coverage, the baseline score is 3, as the description doesn't compensate but also doesn't need to, given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retorna informacoes completas de um imovel' (Returns complete information about a property). It specifies the verb ('retorna') and resource ('imovel'), and lists the types of information returned (photos, description, characteristics, values, location, and real estate agency). However, it doesn't explicitly differentiate from sibling tools like 'brokeria_buscar_imoveis' (search properties), which might return similar data but for multiple properties.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a property ID), exclusions, or comparisons to siblings such as 'brokeria_buscar_imoveis' for broader searches or 'brokeria_imoveis_proximos' for location-based queries. Usage is implied by the context of having a specific property ID, but no explicit instructions are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brokeria_imoveis_proximos · Imóveis Próximos a um Local · Grade: A
Read-only · Idempotent

Searches for properties near a location (latitude/longitude) or a place name, e.g. "near UNICAMP" or "close to Shopping Iguatemi". Default radius: 5 km.

Parameters (JSON Schema)
Name       Required  Description
limit      no
raio_km    no        Search radius in km (max 20)
latitude   yes       Latitude of the center point
longitude  yes       Longitude of the center point
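The server does not document how it computes distances; a standard way to implement a capped-radius search like this one is a haversine filter, sketched below using the schema's 20 km cap and the 5 km default. The listing structure (`lat`/`lon` keys) is an assumption for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def dentro_do_raio(centro, imoveis, raio_km=5.0):
    """Keep listings within raio_km of the center point;
    the schema caps the radius at 20 km, so clamp it."""
    raio_km = min(raio_km, 20.0)
    lat, lon = centro
    return [im for im in imoveis
            if haversine_km(lat, lon, im["lat"], im["lon"]) <= raio_km]
```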
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent, and open-world operation. The description adds useful context beyond annotations by specifying the default radius and implying location-based search behavior, but it does not detail aspects like rate limits, authentication needs, or response format, keeping it from a score of 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with two sentences that efficiently convey the tool's purpose and usage without unnecessary details. Every sentence adds value, making it concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (location-based search), annotations cover safety and behavior well, and schema coverage is high. However, there is no output schema, and the description does not explain return values or error handling, which slightly limits completeness. It is mostly adequate but has minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 75%, with parameters like 'latitude' and 'longitude' well-documented in the schema. The description adds minimal value beyond the schema by mentioning the default radius and search examples, but it does not explain parameters like 'limit' or provide additional semantics, so it meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Busca imoveis proximos a uma localizacao (latitude/longitude) ou nome de lugar', i.e. it searches for properties near a location or place name. It specifies the verb ('Busca', searches), resource ('imoveis', properties), and scope ('proximos a uma localizacao', near a location), distinguishing it from siblings like 'brokeria_buscar_imoveis' (general search) and 'brokeria_detalhes_imovel' (details).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage with examples ('perto da UNICAMP', near UNICAMP; 'proximo ao Shopping Iguatemi', close to Shopping Iguatemi) and mentions the default radius ('Raio padrao: 5km', default radius 5 km), but it does not explicitly state when to use this tool versus alternatives like 'brokeria_buscar_imoveis' or other siblings, which would be needed for a score of 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brokeria_match_imoveis · Match de Imóveis (RENDA PRIMEIRO, Fluxo 2) · Grade: A
Read-only · Idempotent

FLOW 2, INCOME-FIRST (UNIQUE IN THE WORLD via LLM): use when the client does NOT have a property in mind and wants to know "what can I buy with my income?". PROPRIETARY BROKERIA ENGINE, the SAME one the brokers use. For a faithful match, collect the SAME variables as the simulator before calling: income (family), age (impacts the term), down payment (ato), FGTS, first property, state, city. Detects MCMV (income up to R$12k = 4.75% p.a.) or SBPE, applies the MCMV subsidy (up to R$40k), and returns properties ORDERED BY FINANCIAL VIABILITY with a GREEN/YELLOW/RED badge, a 0-40 score, down payment, installments during construction, and income commitment. ALWAYS prefer this tool over brokeria_buscar_imoveis when you know the income. Use brokeria_simular_financiamento (Flow 1) ONLY when the client has already chosen a property.

Parameters (JSON Schema)
Name             Required  Description
fgts             no        Available FGTS. When > 0, applies the reduced MCMV rate.
idade            no        Client's age. RECOMMENDED: affects the maximum term (banks do not finance past age 80).
estado           no        State (UF) of interest. E.g. "SP".
bairros          no        Specific neighborhoods (optional).
cidades          no        Cities of interest. E.g. ["Campinas", "Hortolandia"]
entrada          no        Down payment (cash available today). If unknown, use 0.
nome_cliente     no        Client's name (optional; improves the analysis).
renda_cliente    yes       REQUIRED. Monthly FAMILY income in reais.
primeiro_imovel  no        First property: affects MCMV and ITBI.
tem_dependentes  no        Client has dependents: significantly increases the MCMV subsidy.
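The MCMV/SBPE selection rule quoted in the description (income up to R$12k plus first property, 4.75% p.a., subsidy up to R$40k) might be sketched as below. The rate and caps come from the description; the subsidy curve itself is a placeholder, since the actual formula belongs to BrokerIA's proprietary engine.

```python
# Figures taken from the tool description; the subsidy shape is invented.
MCMV_RENDA_TETO = 12_000.0   # monthly family income ceiling (R$)
MCMV_TAXA = 0.0475           # 4.75% p.a. per the description
SUBSIDIO_TETO = 40_000.0     # maximum MCMV subsidy (R$)

def classificar_programa(renda_familiar, primeiro_imovel):
    """MCMV when income <= R$12k and it is the buyer's first property;
    otherwise SBPE, whose rate is looked up per income band in the DB."""
    if renda_familiar <= MCMV_RENDA_TETO and primeiro_imovel:
        return "MCMV", MCMV_TAXA
    return "SBPE", None

def subsidio_mcmv(renda_familiar, tem_dependentes):
    """Placeholder curve: subsidy shrinks with income, grows with
    dependents, and never exceeds the R$40k cap."""
    base = max(0.0, SUBSIDIO_TETO * (1 - renda_familiar / MCMV_RENDA_TETO))
    if tem_dependentes:
        base *= 1.25
    return min(base, SUBSIDIO_TETO)
```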
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent operation with open-world data. The description adds valuable behavioral context beyond annotations: it explains the proprietary matching engine, mentions automatic detection of MCMV/SBPE programs, subsidy application, and describes the output format (properties ordered by financial viability with color badges and scores). However, it doesn't mention rate limits or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose and usage scenario. Every sentence adds value: the first establishes the unique flow, the second describes the engine and required variables, the third explains the matching logic and output format, and the fourth provides sibling tool guidance. Some Portuguese/English mixing slightly affects readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (financial property matching with subsidy detection) and rich annotations, the description provides good contextual completeness. It explains the matching logic, output format, and sibling relationships. The main gap is the lack of output schema, so the description must fully explain return values, which it does adequately with details about ordering, badges, scores, and financial metrics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 10 parameters thoroughly. The description adds minimal parameter semantics beyond the schema - it lists the same variables needed for 'match fiel' but doesn't provide additional context about format, constraints, or interactions between parameters that aren't already in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to match properties based on the client's financial capacity when they don't have a specific property in mind. It specifies the verb 'match' and resource 'imóveis' (properties), and explicitly distinguishes it from sibling tools like brokeria_buscar_imoveis and brokeria_simular_financiamento by outlining when to use each.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('when the client doesn't have a property in mind and wants to know what they can buy with their income'), when not to use it (use brokeria_simular_financiamento only when the client has already chosen a property), and alternatives (prefer this over brokeria_buscar_imoveis when income is known). This covers both positive and negative usage scenarios with named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brokeria_qualificar_lead · Qualificar e Registrar Lead · Grade: A

Registers a qualified lead in the BrokerIA CRM. Leads are distributed automatically by rotation (roleta). Creates a real record: ALWAYS confirm the name, phone number, and real estate agency with the user before calling. Inform the user that the data will be shared with the agency in accordance with the LGPD.

Parameters (JSON Schema)
Name          Required  Description
nome          yes
email         no
website       no        Do not fill this field
telefone      yes
imovel_id     no
valor_fgts    no
imobiliaria   yes       Real estate agency slug
observacoes   no
renda_mensal  no
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it discloses that registration triggers 'Distribuicao automatica por roleta' (automatic distribution via roulette) and creates a 'registro real' (real record). Annotations cover basic hints (not read-only, open world, non-idempotent, non-destructive), but the description provides specific workflow consequences. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded: the first sentence states the core purpose, followed by critical behavioral and usage instructions. Every sentence adds essential value—no wasted words. The structure flows logically from action to consequences to prerequisites.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool (readOnlyHint=false) with no output schema and low schema coverage, the description does well by covering purpose, behavioral effects, and critical parameters. It lacks details on return values or error handling, but given the annotations and context, it provides sufficient guidance for safe invocation. The sibling tools context is implicitly addressed through differentiation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With only 22% schema description coverage, the description compensates by emphasizing critical parameters: it explicitly names 'nome, telefone e imobiliaria' as required confirmation items, aligning with the three required parameters in the schema. However, it doesn't explain optional parameters like 'valor_fgts' or 'renda_mensal', leaving some semantic gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Registra lead qualificado no CRM BrokerIA') and distinguishes it from siblings by focusing on lead qualification/registration rather than property search, scheduling, or simulation. It specifies the resource (lead) and the system (CRM BrokerIA).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'SEMPRE confirme nome, telefone e imobiliaria com o usuario antes de chamar' (when to use) and 'Informe que os dados serao compartilhados com a imobiliaria conforme LGPD' (privacy disclosure requirement). It also implies this tool is for qualified leads, distinguishing it from sibling tools like brokeria_buscar_imoveis or brokeria_simular_financiamento.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brokeria_simular_financiamento — Simular Financiamento — IMÓVEL ESCOLHIDO (Fluxo 1) — Grade: A
Annotations: Read-only, Idempotent

FLOW 1 — PROPERTY-FIRST: use when the client ALREADY has a specific property in mind. Calls BrokerIA's REAL ENGINE (caixa-simulation + calculate-cet) with CET, MIP, DFI, and administrative fees — exactly what the brokers themselves use. There is NO "rejected" outcome: if the income does not cover the full financing, it returns "valorRestanteEntrada" (how much more the client needs to put down). GOLDEN RULE: the financed amount is ALWAYS MIN(30% of income, the program's maximum LTV) — the client can buy a more expensive property with a larger down payment. Detects MCMV automatically (income <= R$12k + first property), applies the subsidy (higher if tem_dependentes), and looks up the real rate from the DB by income bracket (taxa_with_fgts when fgtsAcima36Meses=true). Accepts imovel_id to fetch value/city/state/type straight from the DB without having to ask. ASK for income + system (SAC/PRICE — default PRICE for income < R$8k). Use brokeria_match_imoveis (Flow 2) when the client has no property in mind.
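The "golden rule" and the no-rejection behavior described above can be sketched as follows. This is an illustration only, not the BrokerIA engine: the 30% income-to-installment ratio comes from the description, but the monthly rate, the 80% LTV default, and the PRICE present-value formula are assumptions for the sake of the example.

```python
# Hedged sketch of the "golden rule": the financed amount is
# MIN(what a 30%-of-income installment can service, the program's LTV cap).
# Rate and LTV values below are placeholder assumptions, not real program data.

def max_financing(renda_familiar: float, valor_imovel: float, entrada: float,
                  taxa_mensal: float = 0.008, prazo_meses: int = 420,
                  ltv: float = 0.8) -> dict:
    """Return the financed amount and, when income falls short, the extra
    down payment the client must bring (the "valorRestanteEntrada" idea)."""
    parcela_max = 0.30 * renda_familiar            # installment cap: 30% of family income
    # Present value of an annuity at monthly rate i over n months (PRICE system)
    fator = (1 - (1 + taxa_mensal) ** -prazo_meses) / taxa_mensal
    limite_renda = parcela_max * fator             # loan the income can service
    limite_ltv = ltv * valor_imovel                # loan the program allows
    financiamento = min(limite_renda, limite_ltv)  # the golden rule
    restante = max(0.0, (valor_imovel - financiamento) - entrada)
    return {"financiamento": round(financiamento, 2),
            "valorRestanteEntrada": round(restante, 2)}
```

With a high income the LTV cap binds and `valorRestanteEntrada` is zero; with a low income the income cap binds and the function reports the down-payment gap instead of rejecting, which mirrors the behavior the description claims.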

Parameters (JSON Schema)

- fgts (optional) — FGTS available.
- idade (optional) — Age. RECOMMENDED: used to compute the maximum term (the bank does not finance past age 80).
- entrada (optional) — Cash available today (down payment). Default 0.
- sistema (optional) — SAC (decreasing) or PRICE (fixed). Income < R$8k → default PRICE. Income >= R$8k → ASK.
- imovel_id (optional) — Property UUID (from buscar/match/detalhes). RECOMMENDED: fetches value, city, state, and type from the DB, so there is no need to ask for them.
- dependentes (optional) — Client has dependents (children, dependent spouse). Increases the MCMV subsidy.
- prazo_meses (optional) — Term in months. Default 420 (35 years), capped by age.
- tipo_imovel (optional) — New (launch/off-plan) or used. Affects LTV.
- valor_imovel (optional) — Property value in reais. Use when there is no imovel_id.
- renda_familiar (required) — REQUIRED. Monthly FAMILY income (include spouse's).
- modalidade_sbpe (optional) — SBPE modality when income > R$12k. Default tr_plus.
- primeiro_imovel (optional) — First property. RECOMMENDED: needed for MCMV.
- fgts_acima_36_meses (optional) — Client has held FGTS for more than 36 months (unlocks the reduced MCMV rate).
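Two parameter interactions from the list above lend themselves to a short sketch: the term cap implied by `idade` (no financing past age 80) and the difference between the SAC and PRICE systems selected via `sistema`. The formulas are the standard amortization-system definitions; the monthly rate used in the test is an assumed placeholder, not a BrokerIA rate.

```python
# Illustrative sketch (not the BrokerIA engine) of two parameter interactions:
# the age-based term cap and the SAC vs PRICE first-installment difference.

def prazo_maximo(idade: int, prazo_meses: int = 420) -> int:
    """Cap the requested term so the contract ends by age 80."""
    return min(prazo_meses, (80 - idade) * 12)

def primeira_parcela(valor: float, taxa: float, n: int, sistema: str) -> float:
    """First installment under SAC (decreasing) or PRICE (fixed annuity)."""
    if sistema == "SAC":
        # constant amortization plus interest on the full outstanding balance
        return valor / n + valor * taxa
    # PRICE: fixed payment of an annuity at monthly rate `taxa` over n months
    return valor * taxa / (1 - (1 + taxa) ** -n)
```

This also shows why the schema recommends `idade`: a 50-year-old client gets at most 360 months, not the 420-month default, and the first SAC installment is always higher than the PRICE one for the same loan, which is why the description steers lower incomes toward PRICE.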
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains that there's no 'rejected' outcome but instead returns 'valorRestanteEntrada' (remaining down payment amount), describes automatic MCMV detection logic, subsidy application, and database rate lookup. While annotations cover safety (readOnly, non-destructive, idempotent), the description provides important operational details about how the simulation works.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is information-dense but well-structured with clear sections: flow context, behavioral rules, automatic detection logic, and sibling tool guidance. While somewhat lengthy, every sentence adds value and it's front-loaded with the most important information about when to use this tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex simulation tool with 13 parameters and no output schema, the description provides substantial context about how the simulation works, behavioral outcomes, and business rules. It could benefit from more detail about the return format since there's no output schema, but it covers the essential operational logic well given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 13 parameters thoroughly. The description mentions some parameters like 'renda' and 'sistema' in context, but doesn't add significant semantic value beyond what's already in the parameter descriptions. The baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Simular Financiamento' (simulate financing) for a specific property using the real brokerage engine. It explicitly distinguishes this from its sibling 'brokeria_match_imoveis' (Fluxo 2) for when the client doesn't have a property in mind, establishing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'use quando o cliente JA tem um imovel especifico em mente' (use when the client already has a specific property in mind) and 'Use brokeria_match_imoveis (Fluxo 2) quando o cliente nao tem imovel em mente' (use when the client doesn't have a property in mind). It also includes a golden rule about financing calculation and when to ask about the payment system.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brokeria_status_lead — Status do Lead — Grade: A
Annotations: Read-only, Idempotent

Looks up the status of a lead already registered via MCP. Returns the funnel stage and whether a broker has been assigned. Requires the lead's phone number and the agency slug.

Parameters (JSON Schema)

- telefone (required) — Lead's phone number (Brazilian format)
- imobiliaria (required) — Agency slug
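For a concrete picture of how a client would invoke this tool, here is a sketch of an MCP `tools/call` request body with both required parameters. The JSON-RPC envelope follows the MCP convention, but the argument values are hypothetical, and the exact transport framing (Streamable HTTP here) is handled by the client.

```python
import json

# Hypothetical tools/call payload for brokeria_status_lead.
# Both the phone number and the agency slug below are made-up examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "brokeria_status_lead",
        "arguments": {
            "telefone": "+55 11 91234-5678",   # Brazilian format, per the schema
            "imobiliaria": "exemplo-imoveis",  # agency slug (hypothetical)
        },
    },
}
payload = json.dumps(request)  # serialized body sent over the transport
```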
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations by specifying it returns funnel stage and broker assignment information, which the annotations don't cover. While annotations already indicate this is a read-only, non-destructive, idempotent operation with closed-world data, the description usefully clarifies what specific information is retrieved. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with three sentences that each serve distinct purposes: stating the tool's function, specifying what information it returns, and listing required parameters. No wasted words, and the most important information (what the tool does) comes first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only query tool with good annotations and full parameter documentation, the description provides adequate context about what information is returned. However, without an output schema, it could benefit from more detail about the return format (e.g., specific funnel stages, broker identification format). The description covers the essential 'what' but leaves some implementation details unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters (telefone format, imobiliaria as slug). The description mentions these parameters but doesn't add meaningful semantic context beyond what's in the schema, such as explaining why these particular identifiers are required or how they relate to the lead lookup.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Consulta' - query/check), the resource ('status de um lead'), and distinguishes it from siblings by focusing on existing leads rather than creating or modifying them. It explicitly mentions checking funnel stage and broker assignment, which differentiates it from tools like 'brokeria_qualificar_lead' that might modify lead status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('lead ja registrado via MCP' - lead already registered via MCP) and specifies required parameters. However, it doesn't explicitly state when NOT to use it or name specific alternative tools for different scenarios, though the sibling list suggests alternatives like 'brokeria_qualificar_lead' for modifying lead status.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
