Glama

refpro-mcp

Server Details

Deterministic real estate underwriting, deal analysis & reports: Fix & Flip, BRRRR, construction.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions (Grade: A)

Average 4.5/5 across 3 of 3 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: deal_quick_check runs a financial analysis, methodology_faq answers methodology questions, and sample_report_fetch retrieves sample report sections. No overlap or ambiguity.

Naming Consistency: 3/5

Names use snake_case but follow inconsistent patterns: 'deal_quick_check' is verb_noun, 'methodology_faq' is noun_noun (no verb), and 'sample_report_fetch' is noun_noun_verb. The pattern is not uniform, though names are still descriptive.

Tool Count: 4/5

With only 3 tools, the server is tightly focused on quick checks, methodology info, and sample reports. The count is slightly low but fits the narrow scope well; there are no excess or missing tools for its purpose.

Completeness: 4/5

The tool set covers the essential operations for its domain: analysis, methodology explanation, and sample data retrieval. A minor gap is the absence of a tool to run a full report, but the set is complete for the stated 'quick check' and reference purposes.

Available Tools

3 tools
deal_quick_check (Grade: A)

Run a deterministic, lender-grade quick check on a real-estate deal. Inputs: deal_type (FF | BRRRR | NC), purchase_price, arv_or_value (ARV for FF, refinance value for BRRRR, sellout for NC), rehab_budget, zip_code; optional annual_debt_service, noi_annual, units, units_to_hold. Returns a PASS / MARGINAL / FAIL verdict, the key financial metrics for that deal type (MAO and margin for FF, TPC and DSCR for BRRRR, TPC plus margin or DSCR for NC), and a one-paragraph summary. Math is identical to the underwriting pipeline used in Refpro's full deal pack — no estimates, no rounding shortcuts.
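Among the metrics named above is DSCR (debt service coverage ratio). As a reminder, the conventional definition is NOI divided by annual debt service; whether Refpro's pipeline uses exactly this formula is an assumption here, and the numbers below are invented:

```python
# DSCR (debt service coverage ratio), conventionally NOI / annual debt service.
# Values are illustrative, not from Refpro.
noi_annual = 48_000           # net operating income per year
annual_debt_service = 40_000  # total loan payments per year

dscr = noi_annual / annual_debt_service
print(f"DSCR = {dscr:.2f}")   # prints "DSCR = 1.20"; above 1.0 means income covers the debt
```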

Parameters (JSON Schema)

Name                 Required
deal_type            Yes
purchase_price       Yes
arv_or_value         Yes
rehab_budget         Yes
zip_code             Yes
annual_debt_service  No
noi_annual           No
units                No
units_to_hold        No

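A hypothetical argument set for this tool might look like the following; the field names follow the parameter list above, while every value is invented for illustration:

```python
# Hypothetical arguments for deal_quick_check; values are invented.
args = {
    "deal_type": "FF",        # FF | BRRRR | NC
    "purchase_price": 250_000,
    "arv_or_value": 360_000,  # ARV, since this example is a Fix & Flip deal
    "rehab_budget": 45_000,
    "zip_code": "30318",
    # Optional: annual_debt_service, noi_annual, units, units_to_hold
}

# Basic client-side check that every required parameter is present.
REQUIRED = {"deal_type", "purchase_price", "arv_or_value", "rehab_budget", "zip_code"}
missing = REQUIRED - args.keys()
assert not missing, f"missing required parameters: {missing}"
print("ready:", sorted(args))
```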
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavior: deterministic, no estimates, no rounding, returns a verdict and metrics. No destructive behavior or side effects are implied, and the tool is non-mutating.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the main purpose and provides details on inputs and outputs. It is somewhat lengthy, but every sentence adds value; it could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains return values (verdict, metrics, summary). It covers required inputs and optional ones, but omits error handling or edge cases. For a complex tool, it is fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning by explaining the required parameters and how arv_or_value varies by deal type. It lists the optional parameters but does not detail units/units_to_hold beyond their existence. Schema description coverage is 0%, so the tool description partially compensates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it runs a deterministic, lender-grade quick check on a real-estate deal, specifying input deal types and output verdict/metrics/summary. It distinguishes from sibling tools (methodology_faq, sample_report_fetch) which are unrelated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for preliminary deal analysis with specific deal types, and notes the math mirrors the full underwriting pipeline. However, it lacks explicit guidance on when not to use or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

methodology_faq (Grade: A)

Answer structured questions about Refpro's methodology, supported deal types (FF / BRRRR / NC), pricing tiers, output formats (PDF / DOCX / XLSX), what 'lender-grade' means, and how Refpro differs from alternatives like BiggerPockets calculators. Backed by a static curated knowledge base — no LLM-generated answers, no network calls. Returns a 2–4 sentence answer, a list of related topic titles, and a canonical source URL on refpro.ai. Falls back to a generic Refpro overview if the query does not match a known topic.

Parameters (JSON Schema)

Name   Required
query  Yes

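For illustration, a minimal MCP `tools/call` request for this tool might be assembled as follows. The envelope shape follows the JSON-RPC 2.0 convention MCP uses; the query text is invented:

```python
import json

# Minimal JSON-RPC 2.0 "tools/call" envelope for methodology_faq.
# MCP tool calls carry the tool name plus an "arguments" object;
# the query text here is invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "methodology_faq",
        "arguments": {"query": "What does 'lender-grade' mean?"},
    },
}
print(json.dumps(request, indent=2))
```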
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels: it discloses the static curated knowledge base, the absence of LLM generation or network calls, the return format (a 2-4 sentence answer, related topics, and a source URL), and the fallback behavior. This fully informs the agent of the tool's constraints and outputs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is five sentences, front-loaded with the core purpose and then detailing specifics. Every sentence adds value, and there is no redundancy. It is concise yet informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one param, no output schema) and lack of annotations, the description covers all necessary aspects: purpose, behavioral traits, parameter usage, and fallback. It is complete for an agent to understand when and how to invoke the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'query' has no schema description (0% coverage), so the description must compensate. It does so by listing example topics (FF/BRRRR/NC, pricing, etc.) and indicating the query should be a structured question about those topics. While it doesn't specify exact formatting, it provides enough semantic guidance for an agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool answers structured questions about Refpro's methodology, supported deal types, pricing tiers, output formats, and more. It identifies the specific resource (static knowledge base) and distinguishes from sibling tools by focusing on factual Q&A rather than deal checks or report fetching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for factual queries about Refpro methodology but does not explicitly state when to use it versus alternatives. No 'when-not-to-use' guidance is provided, though the tool's purpose is clear from context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sample_report_fetch (Grade: A)

Fetch a sanitized public sample section from Refpro's reference deal library. Inputs: deal_type (FF | BRRRR | NC) and section (summary | financials | risk_notes | full). Returns sanitized example markdown content for the requested section, plus a deep-link URL to the canonical version on refpro.ai. The 'full' section stitches summary, financials, and risk_notes in order. All content is sanitized example data — not a real customer deal — and is safe to surface verbatim to end users. No network calls; samples are loaded once at module init.

Parameters (JSON Schema)

Name       Required
deal_type  Yes
section    Yes

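The 'full' stitching behavior described above can be sketched as follows. The SAMPLES store and its contents are invented stand-ins for the server's preloaded sample library:

```python
# Invented stand-in for the server's sample library; the real server
# loads sanitized markdown once at module init.
SAMPLES = {
    ("FF", "summary"): "## Summary\n(sample)",
    ("FF", "financials"): "## Financials\n(sample)",
    ("FF", "risk_notes"): "## Risk Notes\n(sample)",
}

def fetch_section(deal_type: str, section: str) -> str:
    """Return one section, or stitch summary + financials + risk_notes for 'full'."""
    if section == "full":
        parts = ("summary", "financials", "risk_notes")  # stitched in this order
        return "\n\n".join(SAMPLES[(deal_type, p)] for p in parts)
    return SAMPLES[(deal_type, section)]

print(fetch_section("FF", "full").splitlines()[0])  # prints "## Summary"
```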
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses that the data is sanitized, not a real deal, safe to surface, and served without network calls. It explains the 'full' section behavior. It lacks details on error handling but is sufficient for a local fetch.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four concise sentences, front-loaded with the purpose, then inputs, outputs, and a safety note. No redundancy or unnecessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description covers the return content (markdown, URL) and behavior. Nothing is missing for this simple fetch tool given the context signals.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage, but the description explains the enum values for deal_type and section, including special behavior for 'full'. This adds essential meaning beyond the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches a sanitized public sample section from Refpro's reference deal library, specifying inputs (deal_type, section) and outputs (markdown content, deep-link URL). It distinguishes from sibling tools (deal_quick_check, methodology_faq) which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for obtaining example deal data and explicitly notes the content is safe to surface verbatim to end users. However, it does not explicitly compare to siblings or state when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
