USDV Capital — Your Real Estate CFO
Server Details
Real estate market intelligence, calculators, and capital advisory for US investors.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 23 of 23 tools scored. Lowest: 2.9/5.
Tools are well-grouped by prefix (action-, calc-, deal-, eligibility-, markets-, util-), making their purposes distinct at a high level. However, some overlap exists within groups: 'markets-search' and 'markets-screener' could be mistaken for similar search functions, and 'calc-rental_cashflow' and 'calc-dscr' both involve rental property analysis, though their descriptions clarify the differences.
Naming follows a highly consistent pattern with clear prefixes (e.g., action-, calc-, markets-) and snake_case throughout. This structure makes it easy to predict tool functions and navigate the set, with no deviations in style or convention.
With 23 tools, the count is on the higher side for a real estate CFO server. It may feel heavy, but it is reasonable given the broad scope covering calculations, market analysis, eligibility checks, and utilities. The set is borderline in that it approaches the upper limit of typical scoping, but each tool appears to serve a specific niche.
The tool set comprehensively covers the real estate investment domain, including deal analysis, financial calculations (e.g., DSCR, ROI, cash flow), market intelligence, eligibility assessments, and utility functions (e.g., taxes, insurance). There are no obvious gaps, supporting full lifecycle coverage from planning to financing.
Available Tools
23 tools

action-consultation (Grade A)
Schedule a free consultation with a USDV Capital Capital Advisor. 15-30 minutes via phone or video. No obligation, no credit check. Your Capital Advisor will review deal details before the call.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | | |
| phone | Yes | Phone | |
| topic | No | What to discuss | |
| lastName | Yes | Last name | |
| firstName | Yes | Name | |
| dealDetails | No | Deal details for advisor preparation | |
| preferredDate | No | Preferred date (ISO format) | |
| preferredTime | No | morning, afternoon, evening | |
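As an illustration of how an agent might assemble arguments for this tool, the sketch below builds a call payload and enforces the required fields from the table above. The field names come from the table; the validation logic itself is a hypothetical client-side guard, not server behavior.

```python
REQUIRED_FIELDS = {"email", "phone", "lastName", "firstName"}
PREFERRED_TIMES = {"morning", "afternoon", "evening"}

def build_consultation_args(**fields):
    """Validate and assemble an arguments payload for action-consultation.

    Illustrative only: the server performs its own validation; this just
    catches obvious mistakes before a call is made.
    """
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    time = fields.get("preferredTime")
    if time is not None and time not in PREFERRED_TIMES:
        raise ValueError("preferredTime must be morning, afternoon, or evening")
    return {"name": "action-consultation", "arguments": fields}
```

A call supplying the four required fields passes; omitting, say, `phone` raises before anything reaches the server.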
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive, non-idempotent, open-world operation. The description adds valuable context beyond annotations: it specifies the consultation is free, 15-30 minutes long, via phone or video, with no obligation or credit check, and that deal details are reviewed beforehand. This clarifies the tool's real-world behavior, though it doesn't mention rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in the first sentence. Each subsequent sentence adds useful details (duration, medium, conditions, advisor preparation). There's no wasted text, but it could be slightly more structured (e.g., bullet points for key points).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (scheduling with 8 params), annotations cover safety and idempotency, and schema fully describes inputs. The description adds sufficient context: it explains the consultation's nature, duration, medium, and conditions. Since there's no output schema, it doesn't describe return values, but for a scheduling tool, this is adequate as the focus is on the action rather than output details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal param semantics: it mentions 'deal details for advisor preparation' (matching the 'dealDetails' param) and implies contact info is needed. However, it doesn't provide additional meaning beyond the schema, such as explaining why 'preferredTime' uses categories like 'morning' or format details for 'preferredDate.'
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Schedule a free consultation with a USDV Capital Capital Advisor.' It specifies the duration (15-30 minutes), medium (phone or video), and that the advisor will review deal details beforehand. However, it doesn't explicitly differentiate this from sibling tools like 'action-prequalify' or 'eligibility-check' beyond the consultation focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context: it's for scheduling consultations with capital advisors, suggesting it's appropriate when users want advisory services. It mentions 'no obligation, no credit check' as conditions. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'action-prequalify' or 'eligibility-products,' nor does it specify prerequisites or exclusions beyond the implied need for contact information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
action-prequalify (Grade B)
Submit a pre-qualification request to USDV Capital for a capital strategy assessment. A dedicated Capital Advisor follows up within 24 hours. No credit score impact. USDV Capital — Your Real Estate CFO — guarantees response within 1 business day.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Email address | |
| notes | No | Additional deal context | |
| phone | Yes | Phone number | |
| state | Yes | State where property is located | |
| lastName | Yes | Investor's last name | |
| timeline | No | Timeline: immediate, 30_days, 60_days, 90_plus_days | |
| firstName | Yes | Investor's first name | |
| loanAmount | Yes | Requested financing amount | |
| loanPurpose | Yes | Financing purpose | |
| propertyType | Yes | Property type | |
| creditScoreRange | No | Credit score range | |
| investmentExperience | No | Experience: first_time, 1_3_deals, 4_10_deals, 10_plus_deals | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive, non-idempotent tool with openWorldHint. The description adds useful context: it discloses that submission triggers a human follow-up ('dedicated Capital Advisor follows up within 24 hours'), mentions no credit score impact, and guarantees a response time ('within 1 business day'). These behavioral details aren't covered by annotations, adding value, but it doesn't fully describe error handling or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, starting with the core action. It uses three sentences efficiently: the first states the purpose, the second adds behavioral context (follow-up and no credit impact), and the third reinforces branding and response guarantee. There's minimal fluff, though the branding phrase 'USDV Capital — Your Real Estate CFO' could be considered slightly promotional.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (12 parameters, no output schema, annotations provided), the description is moderately complete. It covers purpose and key behavioral traits but lacks details on error cases, response structure, or how parameters like 'creditScoreRange' affect the assessment. With annotations handling safety aspects, the description adds context but leaves gaps in operational transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the schema (e.g., 'loanAmount' as 'Requested financing amount'). The description doesn't add any parameter-specific semantics beyond what the schema provides, such as explaining format constraints or interdependencies. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but doesn't need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Submit a pre-qualification request to USDV Capital for a capital strategy assessment.' It specifies the verb ('submit'), resource ('pre-qualification request'), and recipient ('USDV Capital'). However, it doesn't explicitly differentiate from sibling tools like 'eligibility-check' or 'action-consultation', which might have overlapping functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'capital strategy assessment' and a 'dedicated Capital Advisor follows up within 24 hours,' suggesting this is for real estate financing inquiries. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'eligibility-check' or 'action-consultation,' nor does it specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
action-send_report (Grade A)
Send a professionally formatted Capital Advisory Report (PDF) to an investor's email. Use after running any calculator tool. Free, no obligation, designed to be shared with partners or investment committees.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Email address | |
| location | No | Location context | |
| reportType | Yes | Report type: dscr_analysis, flip_roi, rental_cashflow, str_revenue, brrrr_analysis, construction_budget, deal_analysis | |
| investorName | No | Name to personalize the report | |
| calculationData | Yes | Output from the calculator tool | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key behavioral traits (readOnlyHint=false, destructiveHint=false, etc.), and the description adds useful context: it mentions the report is 'free, no obligation' and 'designed to be shared with partners or investment committees.' However, it lacks details on rate limits, authentication needs, or error handling, leaving some behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage guidance and additional context. It is concise with no redundant information, though the second sentence could be slightly more streamlined by integrating the 'free, no obligation' detail more seamlessly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, nested objects) and rich annotations, the description is mostly complete. It covers purpose, usage context, and behavioral aspects like cost and intent. However, without an output schema, it does not describe return values or success/error responses, which is a minor gap for a tool that sends reports.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents parameters. The description does not add meaning beyond the schema, such as explaining how 'calculationData' should be formatted or the significance of 'reportType' values. Baseline 3 is appropriate as the schema handles parameter semantics adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Send a professionally formatted Capital Advisory Report (PDF) to an investor's email.' It specifies the verb ('send'), resource ('Capital Advisory Report'), format ('PDF'), and recipient ('investor's email'), distinguishing it from calculator tools and other actions like consultation or prequalification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage context: 'Use after running any calculator tool.' This indicates a prerequisite and timing, though it does not specify when not to use it or name alternatives among sibling tools, such as action-consultation for direct communication instead of report sending.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calc-brrrr (Grade B, read-only, idempotent)
Analyze a BRRRR (Buy, Rehab, Rent, Refinance, Repeat) strategy with complete capital cycle assessment. Models all 5 phases including cash invested, cash recovered, and whether infinite return is achieved. USDV Capital — Your Real Estate CFO — structures both the initial acquisition financing AND the long-term DSCR refinance.
| Name | Required | Description | Default |
|---|---|---|---|
| rehabCost | Yes | Total rehab cost | |
| annualTaxes | No | Annual property taxes | |
| refinanceLtv | No | Refinance LTV % (default 75) | |
| sendReportTo | No | Email for Capital Advisory Report | |
| purchasePrice | Yes | Purchase price | |
| refinanceRate | No | Refinance rate (default 7.5) | |
| initialLoanLtv | No | Initial loan LTV % (default 85) | |
| annualInsurance | No | Annual insurance | |
| initialLoanRate | No | Initial loan rate (default 10.5) | |
| afterRepairValue | Yes | After-repair value (ARV) | |
| rehabPeriodMonths | No | Rehab duration in months (default 4) | |
| refinanceTermYears | No | Refinance term in years (default 30) | |
| monthlyRentAfterRehab | Yes | Expected monthly rent after rehab | |
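The description's "cash invested, cash recovered, and whether infinite return is achieved" maps to straightforward arithmetic. A minimal sketch using the table's defaults (85% initial LTV, 75% refinance LTV) — an assumed simplification, not the server's model, since it ignores closing costs, holding costs, and rate effects:

```python
def brrrr_capital_cycle(purchase_price, rehab_cost, after_repair_value,
                        initial_loan_ltv=85.0, refinance_ltv=75.0):
    """Cash-in / cash-out arithmetic for a BRRRR deal (simplified sketch)."""
    initial_loan = purchase_price * initial_loan_ltv / 100
    # Cash invested: down payment on acquisition plus the full rehab budget.
    cash_invested = (purchase_price - initial_loan) + rehab_cost
    # The refinance pays off the initial loan; the rest comes back as cash.
    refinance_loan = after_repair_value * refinance_ltv / 100
    cash_recovered = refinance_loan - initial_loan
    capital_left_in_deal = cash_invested - cash_recovered
    return {
        "cash_invested": cash_invested,
        "cash_recovered": cash_recovered,
        "capital_left_in_deal": capital_left_in_deal,
        # "Infinite return": all invested capital (or more) recovered.
        "infinite_return": capital_left_in_deal <= 0,
    }
```

For example, a $200,000 purchase with $40,000 of rehab and a $320,000 ARV puts $70,000 in and pulls $70,000 back out at the refinance, leaving nothing in the deal — the "infinite return" case the description refers to.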
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, indicating a safe, non-mutating operation. The description adds some context by mentioning 'complete capital cycle assessment' and 'infinite return,' but doesn't disclose behavioral traits like rate limits, authentication needs, or output format. It doesn't contradict annotations, so a baseline 3 is appropriate given the annotations cover safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. The second sentence adds value by specifying what's modeled, but the third sentence about 'USDV Capital' is promotional and less essential. Overall, it's efficient with minimal waste, though the last part could be trimmed for better focus.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (13 parameters, no output schema) and rich annotations, the description is adequate but has gaps. It explains the analysis scope but doesn't detail output format, error conditions, or integration with sibling tools. With no output schema, more guidance on results would be helpful, but the annotations provide safety context, making it minimally viable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 13 parameters. The description adds no specific parameter semantics beyond implying the tool models 'all 5 phases' of BRRRR, which aligns with parameters like 'purchasePrice' and 'rehabCost.' However, it doesn't provide additional syntax, format, or usage details beyond what the schema already offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Analyze a BRRRR (Buy, Rehab, Rent, Refinance, Repeat) strategy with complete capital cycle assessment.' It specifies the verb ('analyze'), resource ('BRRRR strategy'), and scope ('complete capital cycle assessment'), and distinguishes itself from sibling tools like 'calc-flip_roi' or 'calc-rental_cashflow' by focusing on the full BRRRR methodology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'USDV Capital — Your Real Estate CFO — structures both the initial acquisition financing AND the long-term DSCR refinance,' which hints at a specific context but doesn't explicitly state when to choose this tool over siblings like 'calc-dscr' or 'calc-construction.' No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calc-construction (Grade A, read-only, idempotent)
Estimate construction and renovation costs by location, scope, and property type. Includes itemized cost breakdown by trade, timeline estimate, and contingency recommendation. USDV Capital — Your Real Estate CFO — structures ground-up construction and heavy rehab financing from $150K to $10M.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | State | |
| location | Yes | City or county | |
| projectType | Yes | Project scope | |
| propertyType | No | Property type | |
| qualityLevel | No | Finish quality (default standard) | |
| sendReportTo | No | Email for Capital Advisory Report | |
| squareFootage | Yes | Total square footage | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the tool provides 'itemized cost breakdown by trade, timeline estimate, and contingency recommendation' and mentions a financing service ('USDV Capital — Your Real Estate CFO'). Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, so the description appropriately supplements rather than contradicts them with operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first clearly states the tool's function and outputs, and the second provides branding and financing context. While the branding phrase ('USDV Capital — Your Real Estate CFO') adds some marketing fluff, the core information is front-loaded and concise, with minimal wasted space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, no output schema) and rich annotations, the description is reasonably complete. It explains what the tool does, key outputs, and context, though it could better address when to use it versus siblings. With annotations covering safety (readOnly, non-destructive) and the schema fully documenting parameters, the description provides adequate supplemental information for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all 7 parameters thoroughly. The description adds marginal value by implying how parameters like 'location', 'scope', and 'property type' are used for estimation, but doesn't provide additional syntax, format, or interaction details beyond what the schema specifies. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Estimate construction and renovation costs') and resources ('by location, scope, and property type'), distinguishing it from siblings like financial calculators (calc-dscr, calc-flip_roi) and market tools. It explicitly mentions outputs like 'itemized cost breakdown by trade, timeline estimate, and contingency recommendation' that differentiate it from other calculation tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through phrases like 'structures ground-up construction and heavy rehab financing' and mentions a specific financing range ('$150K to $10M'), but doesn't explicitly state when to use this tool versus alternatives like 'calc-brrrr' or 'deal-analyze'. No clear exclusions or prerequisites are provided, leaving usage context somewhat implied rather than explicitly guided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calc-dscr (Grade A, read-only, idempotent)
Calculate the Debt Service Coverage Ratio (DSCR) for a rental property or Airbnb short-term rental with a full CFO-level assessment. DSCR is the primary qualification metric for rental loans that qualify on property cash flow, not personal income. Works for long-term rentals, Airbnb/VRBO, and BRRRR refinance analysis. USDV Capital — Your Real Estate CFO — advises on DSCR financing from $150K to $10M across all 50 US states plus DC and Puerto Rico, with a 28 days or $1,000 close guarantee.
| Name | Required | Description | Default |
|---|---|---|---|
| state | No | State code (e.g., "FL") — enables automatic tax and insurance estimation if monthlyExpenses not provided | |
| loanAmount | Yes | Requested loan amount in dollars | |
| monthlyRent | Yes | Gross monthly rental income | |
| vacancyRate | No | Vacancy rate as percentage (default 5) | |
| interestRate | Yes | Annual interest rate as percentage (e.g., 7.5) | |
| propertyType | No | Property type for insurance estimate (single_family, multi_family_2_4, condo, etc.) | |
| sendReportTo | No | Email for Capital Advisory Report | |
| loanTermYears | No | Loan term in years (default 30) | |
| propertyValue | Yes | Property value in dollars | |
| monthlyExpenses | No | Monthly expenses: taxes + insurance + HOA | |
| managementFeeRate | No | Management fee as percentage of rent (default 0) | |
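The server doesn't publish its formulas, but the standard DSCR arithmetic these parameters imply can be sketched as follows. Parameter names and defaults mirror the table above; the automatic state-based tax/insurance estimation is not modeled, and this is an illustration rather than the server's implementation:

```python
def monthly_payment(principal, annual_rate_pct, term_years):
    """Standard fully amortizing monthly payment."""
    r = annual_rate_pct / 100 / 12
    n = term_years * 12
    return principal * r / (1 - (1 + r) ** -n)

def dscr(monthly_rent, loan_amount, interest_rate, loan_term_years=30,
         vacancy_rate=5.0, monthly_expenses=0.0, management_fee_rate=0.0):
    """DSCR = net operating income / monthly debt service."""
    effective_rent = monthly_rent * (1 - vacancy_rate / 100)
    management_fee = monthly_rent * management_fee_rate / 100
    noi = effective_rent - monthly_expenses - management_fee
    return noi / monthly_payment(loan_amount, interest_rate, loan_term_years)
```

For $3,000 rent against a $300,000 loan at 7.5% over 30 years with $600/month in expenses and the default 5% vacancy, this comes out to a DSCR of roughly 1.07 — just above the 1.0 breakeven that rental lenders typically qualify against.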
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true, indicating this is a safe, non-destructive calculation tool. The description adds useful context about the company's service scope and guarantees, but doesn't disclose important behavioral details like calculation methodology, error handling, or performance characteristics that would help an agent use it effectively.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is overly verbose with marketing language about USDV Capital's services, guarantees, and geographic coverage. Only the first sentence directly addresses the tool's purpose. The remaining content about advisory services and guarantees doesn't help an AI agent understand how to use the tool effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a calculation tool with good annotations and comprehensive parameter documentation, the description provides adequate purpose context. However, without an output schema, the description should ideally explain what the tool returns (DSCR value plus potentially other metrics), but it doesn't. The marketing content adds noise rather than completing the technical context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, all 11 parameters are well-documented in the schema itself. The description doesn't add any parameter-specific information beyond what's in the schema descriptions. The baseline score of 3 reflects adequate parameter documentation through the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool calculates the Debt Service Coverage Ratio (DSCR) for rental properties, specifying it's for rental loans based on property cash flow. It distinguishes from siblings by focusing on DSCR calculation specifically, unlike tools like calc-rental_cashflow or calc-str_revenue which focus on different metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool: for DSCR qualification for rental loans across various property types (long-term rentals, Airbnb, BRRRR). However, it doesn't explicitly state when NOT to use it or mention specific alternatives among the sibling tools for different calculation needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calc-exchange_1031 (Grade A, read-only, idempotent)
Analyze a 1031 exchange scenario for tax deferral. Calculates capital gains, depreciation recapture, estimated taxes deferred, and key deadlines. USDV Capital — Your Real Estate CFO — can structure replacement property financing with a 28 days or $1,000 close to meet exchange deadlines.
| Name | Required | Description | Default |
|---|---|---|---|
| state | No | State for state tax estimate | |
| mortgageBalance | No | Current mortgage balance | |
| depreciationTaken | No | Total depreciation taken | |
| capitalImprovements | No | Capital improvements made | |
| originalPurchasePrice | Yes | Original purchase price | |
| replacementPropertyValue | No | Replacement property value for boot analysis | |
| relinquishedPropertyValue | Yes | Sale price of property being sold | |
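The core arithmetic behind "capital gains, depreciation recapture, estimated taxes deferred" follows standard basis accounting. A federal-only sketch with illustrative rate assumptions (20% long-term capital gains, 25% depreciation recapture); state tax and the 45-day identification / 180-day closing deadlines the tool also reports are out of scope here:

```python
def exchange_1031_deferral(relinquished_value, original_purchase_price,
                           depreciation_taken=0.0, capital_improvements=0.0,
                           cap_gains_rate=0.20, recapture_rate=0.25):
    """Federal-only estimate of taxes deferred by a 1031 exchange (sketch)."""
    # Adjusted basis: what you paid, plus improvements, minus depreciation.
    adjusted_basis = (original_purchase_price + capital_improvements
                      - depreciation_taken)
    realized_gain = relinquished_value - adjusted_basis
    # Depreciation is recaptured first, capped by the realized gain.
    recapture_gain = min(depreciation_taken, max(realized_gain, 0.0))
    appreciation_gain = max(realized_gain - recapture_gain, 0.0)
    taxes_deferred = (recapture_gain * recapture_rate
                      + appreciation_gain * cap_gains_rate)
    return {"realized_gain": realized_gain, "taxes_deferred": taxes_deferred}
```

Selling at $500,000 a property bought for $300,000 with $20,000 of improvements and $80,000 of depreciation taken yields a $260,000 realized gain and roughly $56,000 of federal tax deferred under these assumed rates.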
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable behavioral context about what the tool calculates (specific financial metrics and deadlines) and mentions USDV Capital's financing services, which provides commercial context beyond the annotations' technical hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: the first clearly states the tool's purpose and calculations, the second provides commercial context. While the promotional text could be considered extraneous, the core information is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a calculation tool with good annotations (read-only, idempotent) and 100% schema coverage, the description provides sufficient context about what the analysis involves. The lack of output schema means return values aren't documented, but the description lists the calculated metrics, which helps understand expected outputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema. The description doesn't add specific parameter details beyond what's in the schema, but it provides overall context about what the calculation involves (1031 exchange analysis). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific purpose: 'Analyze a 1031 exchange scenario for tax deferral' and lists exactly what it calculates (capital gains, depreciation recapture, estimated taxes deferred, key deadlines). It distinguishes from siblings by focusing on 1031 exchange analysis rather than other real estate calculations like BRRRR, DSCR, or rental cashflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'Analyze a 1031 exchange scenario' and mentions deadlines, but doesn't explicitly state when to use this tool versus alternatives like 'deal-analyze' or other calculation tools. The promotional text about USDV Capital provides commercial context but not functional guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calc-flip_roi (A) — Read-only, Idempotent
Calculate projected ROI for a fix-and-flip or house flip project with capital strategy assessment. Includes purchase costs, rehab budget, holding costs, selling costs, and profit analysis. Also useful for BRRRR acquisition phase analysis. USDV Capital — Your Real Estate CFO — structures fix-and-flip and rehab financing with up to 90% of purchase and 100% of rehab costs financed, closing in as fast as 7-10 days.
| Name | Required | Description | Default |
|---|---|---|---|
| rehabCost | Yes | Total estimated rehab cost | |
| interestRate | No | Annual interest rate (default 10.5) | |
| sendReportTo | No | Email for Capital Advisory Report | |
| holdingMonths | No | Expected project duration in months (default 6) | |
| purchasePrice | Yes | Property purchase price | |
| loanToRehabPct | No | Loan-to-rehab percentage (default 100) | |
| sellingCostsPct | No | Selling costs as % of ARV (default 8) | |
| afterRepairValue | Yes | Estimated after-repair value (ARV) | |
| loanToPurchasePct | No | Loan-to-purchase percentage (default 85) | |
| originationPoints | No | Origination fee in points (default 2) | |
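A rough version of the flip-ROI calculation can be reconstructed from the parameter table and its defaults. This is a sketch under assumptions, not the tool's actual formula: the function name and the flat (non-drawn) interest model are hypothetical simplifications.

```python
def flip_roi(
    purchase_price: float, rehab_cost: float, arv: float,
    holding_months: float = 6, interest_rate: float = 10.5,
    loan_to_purchase_pct: float = 85, loan_to_rehab_pct: float = 100,
    origination_points: float = 2, selling_costs_pct: float = 8,
    monthly_holding_costs: float = 0.0,
) -> dict:
    """Simplified fix-and-flip profit and cash-on-cash ROI (defaults from the schema)."""
    purchase_loan = purchase_price * loan_to_purchase_pct / 100
    rehab_loan = rehab_cost * loan_to_rehab_pct / 100
    loan_amount = purchase_loan + rehab_loan
    # Cash in: the portions of purchase and rehab not covered by the loan
    cash_in = (purchase_price - purchase_loan) + (rehab_cost - rehab_loan)
    points = loan_amount * origination_points / 100
    # Assumes interest accrues on the full loan for the whole hold (simplification)
    interest = loan_amount * (interest_rate / 100) * (holding_months / 12)
    selling_costs = arv * selling_costs_pct / 100
    holding = monthly_holding_costs * holding_months
    profit = arv - purchase_price - rehab_cost - points - interest - selling_costs - holding
    cash_invested = cash_in + points
    roi = profit / cash_invested if cash_invested else float("inf")
    return {"profit": profit, "cash_invested": cash_invested, "roi_pct": roi * 100}

# Example: $200K purchase, $50K rehab, $320K ARV, all defaults
deal = flip_roi(200_000, 50_000, 320_000)
```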
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true, indicating a safe, non-mutating operation. The description adds valuable context about USDV Capital's financing terms (up to 90% purchase/100% rehab financing, 7-10 day closing) which helps understand the business context, though it doesn't detail rate limits or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but includes promotional content about USDV Capital that doesn't directly help tool selection. The first sentence efficiently covers purpose and scope, but the second sentence about financing terms and closing times is more marketing than functional guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations cover safety (readOnly, non-destructive) and the schema fully documents parameters, the description provides adequate context for a calculation tool. However, without an output schema, it could better explain what the ROI calculation returns (e.g., percentage, dollar amounts, breakdown).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, all parameters are already documented in the input schema. The description mentions purchase costs, rehab budget, holding costs, and selling costs which align with parameters like purchasePrice, rehabCost, holdingMonths, and sellingCostsPct, but adds no additional semantic information beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool calculates projected ROI for fix-and-flip projects with capital strategy assessment, specifying it includes purchase costs, rehab budget, holding costs, selling costs, and profit analysis. It distinguishes from sibling tools like calc-brrrr by focusing on flip ROI rather than BRRRR or other real estate calculations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for fix-and-flip or house flip projects and BRRRR acquisition phase analysis. However, it doesn't explicitly state when not to use it or name specific alternatives among sibling tools, though the context implies it's for flip-specific calculations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calc-rental_cashflow (A) — Read-only, Idempotent
Analyze monthly and annual cash flow for a rental property with a complete operating statement and CFO-level assessment. Provides NOI, cap rate, cash-on-cash return, DSCR, and expense breakdown. Works for long-term rentals, Airbnb, and buy-and-hold analysis. USDV Capital — Your Real Estate CFO.
| Name | Required | Description | Default |
|---|---|---|---|
| monthlyHoa | No | Monthly HOA | |
| annualTaxes | No | Annual property taxes | |
| monthlyRent | Yes | Gross monthly rental income | |
| vacancyRate | No | Vacancy rate % (default 5) | |
| interestRate | No | Annual interest rate (default 7.5) | |
| sendReportTo | No | Email for Capital Advisory Report | |
| loanTermYears | No | Loan term in years (default 30) | |
| purchasePrice | Yes | Property purchase price | |
| downPaymentPct | No | Down payment percentage (default 25) | |
| annualInsurance | No | Annual insurance | |
| maintenanceRate | No | Maintenance reserve % of rent (default 10) | |
| capexReserveRate | No | CapEx reserve % of rent (default 5) | |
| managementFeeRate | No | Management fee % of rent (default 8) | |
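The metrics this tool returns (NOI, cap rate, DSCR, cash-on-cash) follow standard definitions and can be sketched from the parameters above. A simplified model with assumed conventions: percentage reserves are applied to gross rent, and the mortgage payment uses the standard amortization formula. The function name is hypothetical.

```python
def rental_cashflow(
    purchase_price: float, monthly_rent: float,
    down_payment_pct: float = 25, interest_rate: float = 7.5,
    loan_term_years: int = 30, vacancy_rate: float = 5,
    management_fee_rate: float = 8, maintenance_rate: float = 10,
    capex_reserve_rate: float = 5, annual_taxes: float = 0,
    annual_insurance: float = 0, monthly_hoa: float = 0,
) -> dict:
    """Simplified rental operating statement (defaults from the schema)."""
    gross_annual = monthly_rent * 12
    effective_income = gross_annual * (1 - vacancy_rate / 100)
    # %-of-rent expense categories applied to gross rent (assumed convention)
    opex = (
        gross_annual * (management_fee_rate + maintenance_rate + capex_reserve_rate) / 100
        + annual_taxes + annual_insurance + monthly_hoa * 12
    )
    noi = effective_income - opex
    loan = purchase_price * (1 - down_payment_pct / 100)
    r = interest_rate / 100 / 12          # monthly interest rate
    n = loan_term_years * 12              # number of payments
    monthly_debt = loan * r / (1 - (1 + r) ** -n) if r else loan / n
    annual_debt = monthly_debt * 12
    cashflow = noi - annual_debt
    down = purchase_price * down_payment_pct / 100
    return {
        "noi": noi,
        "cap_rate_pct": noi / purchase_price * 100,
        "dscr": noi / annual_debt,
        "annual_cashflow": cashflow,
        "cash_on_cash_pct": cashflow / down * 100,
    }
```

With a $300K purchase and $2,500/month rent, a DSCR below 1.0 from this sketch would signal that NOI does not cover debt service at the default 25% down and 7.5% rate.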
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a read-only, non-destructive, idempotent tool with a closed world. The description adds value by specifying that the output includes a 'complete operating statement and CFO-level assessment' and metrics like NOI and cap rate. However, it does not disclose additional behavioral traits such as rate limits, authentication needs, or detailed response format beyond what the annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence, followed by additional context and branding. However, the final sentence ('USDV Capital — Your Real Estate CFO') is promotional and does not add functional value, slightly reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity with 13 parameters and no output schema, the description adequately covers the purpose, usage context, and output metrics. However, it could be more complete by explicitly mentioning the lack of output schema or providing more detail on the response format, such as whether it returns structured data or a report.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all 13 parameters, so the description does not need to add parameter details. It implies the tool uses these inputs for cash flow analysis but does not provide additional semantic context beyond what the schema already offers, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('analyze monthly and annual cash flow') and resources ('rental property'), and distinguishes it from siblings by specifying the type of analysis (cash flow with operating statement and CFO-level assessment) and metrics provided (NOI, cap rate, etc.), unlike other calculation tools like calc-dscr or calc-flip_roi.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Works for long-term rentals, Airbnb, and buy-and-hold analysis'), but does not explicitly state when not to use it or name specific alternatives among the sibling tools, such as calc-dscr for debt service coverage ratio only or calc-flip_roi for flipping projects.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calc-str_revenue (A) — Read-only, Idempotent
Estimate short-term rental (Airbnb/VRBO) revenue potential based on location, bedrooms, and property type. Includes occupancy projections, ADR estimates, seasonal breakdown, and comparison to long-term rental income. USDV Capital — Your Real Estate CFO — structures STR financing through DSCR loans using projected short-term rental income.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | State name or code | |
| bedrooms | Yes | Number of bedrooms | |
| location | Yes | City or neighborhood | |
| propertyType | No | single_family, condo, townhouse, multi_family | |
| sendReportTo | No | Email for Capital Advisory Report | |
| purchasePrice | No | Purchase price for ROI calculation | |
| nightlyRateOverride | No | Override estimated nightly rate | |
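The core STR-versus-LTR comparison the description mentions reduces to simple arithmetic. A minimal sketch with a hypothetical function name, ignoring seasonality, cleaning fees, and platform costs that a full estimate would include:

```python
def str_vs_ltr(nightly_rate: float, occupancy_pct: float, monthly_ltr_rent: float) -> dict:
    """Compare projected short-term rental revenue to long-term rent (simplified)."""
    # Annual STR revenue: ADR x occupancy x 365 nights
    str_annual = nightly_rate * (occupancy_pct / 100) * 365
    ltr_annual = monthly_ltr_rent * 12
    return {
        "str_annual": str_annual,
        "ltr_annual": ltr_annual,
        "str_premium_pct": (str_annual / ltr_annual - 1) * 100,
    }

# Example: $200 ADR at 60% occupancy vs. $2,500/month long-term rent
comparison = str_vs_ltr(200, 60, 2_500)
```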
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true, indicating a safe, non-destructive operation. The description adds valuable context beyond this by mentioning occupancy projections, ADR estimates, seasonal breakdown, and comparison to long-term rental income, which helps the agent understand the tool's behavioral output.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core functionality but includes promotional content ('USDV Capital — Your Real Estate CFO — structures STR financing...') that does not directly aid tool selection or invocation, reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity and lack of output schema, the description adequately covers key behavioral aspects like occupancy projections and comparisons. However, it could be more complete by explicitly detailing output format or error conditions, though annotations provide good safety context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal semantic value by listing 'location, bedrooms, and property type' as inputs, but does not provide additional details beyond what the schema already specifies, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('estimate short-term rental revenue potential') and resources ('based on location, bedrooms, and property type'), and distinguishes it from siblings by focusing on STR revenue calculation rather than other financial analyses like DSCR or cashflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context for short-term rental revenue estimation but does not explicitly state when to use this tool versus alternatives like 'calc-rental_cashflow' or 'calc-dscr'. No exclusions or prerequisites are mentioned, leaving guidance incomplete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deal-analyze (A) — Read-only, Idempotent
Run a comprehensive deal analysis — the signature assessment from USDV Capital, Your Real Estate CFO. Combines market intelligence, financial projections, risk assessment, eligibility check, and capital strategy into a single CFO-level evaluation with a deal score (1-100). Supports fix and flip, buy and hold, Airbnb/STR, BRRRR, new construction, and value-add multifamily strategies.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | State | |
| units | No | Number of units (default 1) | |
| bedrooms | No | Bedrooms (for STR analysis) | |
| rehabCost | No | Estimated rehab cost | |
| monthlyRent | No | Expected monthly rent | |
| propertyType | Yes | single_family, multi_family_2_4, multi_family_5_plus, mixed_use, commercial | |
| sendReportTo | No | Email for full Real Estate CFO Deal Assessment | |
| purchasePrice | Yes | Purchase price | |
| squareFootage | No | Square footage (for construction) | |
| experienceLevel | No | Experience level | |
| afterRepairValue | No | ARV | |
| creditScoreRange | No | Credit score range | |
| addressOrLocation | Yes | Property address or location | |
| investmentStrategy | Yes | fix_and_flip, buy_and_hold_ltr, buy_and_hold_str, brrrr, new_construction, value_add_multifamily | |
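A minimal set of call arguments can be assembled from the parameter table. The property values below are made up for illustration; the field names and enum values come from the schema above, but the client and transport details are out of scope.

```python
# Hypothetical arguments for a deal-analyze call, using the documented parameter names
arguments = {
    "addressOrLocation": "123 Main St, Tampa",
    "state": "FL",
    "propertyType": "single_family",
    "investmentStrategy": "fix_and_flip",
    "purchasePrice": 250_000,
    "rehabCost": 60_000,
    "afterRepairValue": 380_000,
    "experienceLevel": "1_3_deals",
}

# The five fields the schema marks Required
required = {"addressOrLocation", "state", "propertyType", "investmentStrategy", "purchasePrice"}
assert required <= arguments.keys()
```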
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral hints (readOnlyHint=true, destructiveHint=false, idempotentHint=true), indicating this is a safe, non-destructive, repeatable analysis operation. The description adds useful context by mentioning it's a 'signature assessment from USDV Capital' and includes a 'deal score (1-100)', but does not disclose additional behavioral traits like rate limits, authentication needs, or what specific outputs to expect beyond the score. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core action ('Run a comprehensive deal analysis') followed by key features and supported strategies in a single, efficient sentence. Every phrase adds value without redundancy, making it easy for an agent to quickly grasp the tool's scope and applicability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (14 parameters, no output schema) and rich annotations, the description is largely complete for agent selection. It covers the tool's comprehensive nature, scoring output, and supported strategies. However, it lacks details on the format or content of the analysis results beyond the deal score, which could be important for an agent to understand what to expect from this read-only operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all 14 parameters thoroughly. The description does not add any parameter-specific semantics beyond what's in the schema, such as explaining relationships between parameters (e.g., how rehabCost interacts with investmentStrategy). It implies the tool uses these inputs for analysis but provides no additional details on parameter usage or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Run a comprehensive deal analysis') and resources ('deal analysis'), distinguishing it from siblings by emphasizing its comprehensive CFO-level evaluation and deal scoring (1-100). It explicitly mentions the combination of market intelligence, financial projections, risk assessment, eligibility check, and capital strategy, which sets it apart from more specialized calculation or eligibility tools in the sibling list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool by listing supported investment strategies (fix and flip, buy and hold, Airbnb/STR, BRRRR, new construction, value-add multifamily), which helps differentiate it from more specific calculation tools like calc-brrrr or calc-flip_roi. However, it does not explicitly state when not to use it or name specific alternatives among siblings, such as using calc-dscr for debt service coverage ratio calculations instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
eligibility-check (A) — Read-only, Idempotent
Check whether financing is available through USDV Capital's capital partner network for a specific real estate investment scenario — fix and flip, DSCR rental, bridge loan, ground up construction, Airbnb short-term rental, or BRRRR. Returns matching capital solutions with estimated terms. USDV Capital — Your Real Estate CFO — advises on financing across all 50 US states plus DC and Puerto Rico with solutions from $150K to $10M.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | State where the property is located | |
| loanAmount | Yes | Requested financing amount | |
| loanPurpose | Yes | Loan purpose: purchase, refinance, cash_out_refinance, rehab, construction | |
| propertyType | Yes | Property type: single_family, multi_family_2_4, multi_family_5_plus, mixed_use, commercial, land, ground_up | |
| experienceLevel | No | Experience: first_time, 1_3_deals, 4_10_deals, 10_plus_deals | |
| creditScoreRange | No | Credit score range: below_620, 620_659, 660_699, 700_739, 740_plus | |
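Since the description states a $150K-$10M solution range and the schema enumerates valid loan purposes, a client can sanity-check a request before calling the tool. This pre-check is a hypothetical helper, not part of the server:

```python
# Enum values copied from the loanPurpose schema description
VALID_PURPOSES = {"purchase", "refinance", "cash_out_refinance", "rehab", "construction"}

def precheck_eligibility_request(loan_amount: float, loan_purpose: str) -> list[str]:
    """Return a list of problems found before calling eligibility-check."""
    problems = []
    if not 150_000 <= loan_amount <= 10_000_000:  # stated $150K-$10M solution range
        problems.append("loan_amount outside the $150K-$10M range")
    if loan_purpose not in VALID_PURPOSES:
        problems.append(f"unknown loan_purpose: {loan_purpose}")
    return problems
```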
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the return format ('matching capital solutions with estimated terms'), mentions the financing range ('$150K to $10M'), and identifies the service provider ('USDV Capital'). While annotations cover read-only, non-destructive, and idempotent characteristics, the description provides practical implementation details that help the agent understand what to expect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. The second sentence adds useful context about returns and service scope. While slightly promotional in tone ('Your Real Estate CFO'), every sentence contributes meaningful information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, comprehensive annotations, and complete schema coverage, the description provides adequate context. It explains what the tool does, what it returns, and the service scope. The main gap is the absence of an output schema, but the description partially compensates by mentioning the return format ('matching capital solutions with estimated terms').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description doesn't add specific parameter semantics beyond what's in the schema, though it implicitly references some parameter concepts (like investment scenarios that map to loanPurpose). This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: checking financing eligibility for specific real estate investment scenarios through USDV Capital's network. It specifies the verb ('Check'), resource ('financing'), and scope (multiple investment types and geographic coverage), distinguishing it from sibling tools like 'eligibility-products' or calculation tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for checking financing eligibility across various real estate investment scenarios in all US states plus DC and Puerto Rico. However, it doesn't explicitly mention when NOT to use it or name specific alternatives among the sibling tools, though the context suggests it's for eligibility checking rather than calculations or market analysis.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
eligibility-products (B) — Read-only, Idempotent
Get detailed information about capital solutions available through USDV Capital: Fix & Flip loans, DSCR rental loans, bridge loans, and ground-up construction financing. Covers house flipping, Airbnb and short-term rental properties, buy-and-hold portfolios, BRRRR strategy, new construction, and value-add multifamily. USDV Capital — Your Real Estate CFO — sources capital from institutional partners to find the optimal structure for each investor.
| Name | Required | Description | Default |
|---|---|---|---|
| product_type | No | Filter: dscr, fix_and_flip, bridge, construction, all | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a read-only, non-destructive, idempotent, and closed-world tool. The description adds context by specifying the source of capital (USDV Capital, institutional partners) and the goal (finding optimal structures), which helps the agent understand the business context. However, it doesn't disclose additional behavioral traits like rate limits, authentication needs, or response format details. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but includes marketing language ('USDV Capital — Your Real Estate CFO') that doesn't directly aid tool selection. The first sentence effectively states the purpose, but subsequent details about use cases and sourcing could be more streamlined. It's front-loaded with key information but includes extraneous content that reduces efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema) and rich annotations (read-only, idempotent, etc.), the description is somewhat complete. It covers what the tool does and the context but lacks details on output format or error handling. Without an output schema, the description doesn't explain return values, leaving a gap in completeness for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for its single parameter 'product_type', with a clear enum-like list in the schema description. The tool description doesn't add any parameter-specific information beyond what's in the schema, such as default behavior or examples. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to get detailed information about specific capital solutions (Fix & Flip loans, DSCR rental loans, bridge loans, ground-up construction financing). It lists the product types and use cases (house flipping, Airbnb, BRRRR strategy, etc.), making the verb+resource explicit. However, it doesn't distinguish this tool from sibling tools like 'eligibility-check' or 'action-consultation', which might overlap in providing capital-related information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions the types of loans and use cases but doesn't specify scenarios where this tool is preferred over siblings like 'eligibility-check' (which might verify eligibility) or 'action-consultation' (which could offer personalized advice). There's no explicit when/when-not or alternative tool references.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markets-compare (A) — Read-only, Idempotent
Compare real estate investment metrics across 2-5 US markets side-by-side with CFO-level analysis. Ideal for investors deciding where to deploy capital. USDV Capital — Your Real Estate CFO.
| Name | Required | Description | Default |
|---|---|---|---|
| locations | Yes | Array of locations to compare | |
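The description fixes the only constraint on `locations`: 2-5 markets per call. A hypothetical client-side helper can enforce that before the request is sent:

```python
def build_compare_request(locations: list[str]) -> dict:
    """Build a markets-compare payload; the tool compares 2-5 markets side-by-side."""
    if not 2 <= len(locations) <= 5:
        raise ValueError("markets-compare requires between 2 and 5 locations")
    return {"locations": locations}
```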
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only, non-destructive, and idempotent behavior, which the description does not contradict. The description adds value by specifying the analysis as 'CFO-level' and the target audience ('investors'), providing context beyond annotations. However, it does not detail rate limits, authentication needs, or specific output format, leaving some behavioral aspects uncovered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, with the core purpose in the first sentence and usage context in the second. The third sentence ('USDV Capital — Your Real Estate CFO.') is promotional and adds no functional value, slightly reducing efficiency. Overall, it is well-structured but includes minor waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (comparative analysis across markets) and lack of output schema, the description provides good context on purpose and usage. Annotations cover safety and idempotency, but the description could better explain output expectations or limitations. It is mostly complete but has room for improvement in detailing behavioral aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the 'locations' parameter documented as 'Array of locations to compare'. The description adds no additional parameter semantics beyond this, such as format examples or constraints. Given high schema coverage, the baseline score of 3 is appropriate, as the description does not compensate but also does not detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Compare real estate investment metrics across 2-5 US markets side-by-side with CFO-level analysis.' It specifies the verb ('compare'), resource ('real estate investment metrics'), scope ('2-5 US markets'), and analytical depth ('CFO-level analysis'), distinguishing it from sibling tools like 'markets-stats' or 'markets-screener'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'Ideal for investors deciding where to deploy capital.' It implies when to use the tool but does not explicitly state when not to use it or name alternatives among siblings like 'markets-nearby' or 'markets-search'. This gives good guidance but lacks explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markets-nearby (Grade A) · Read-only · Idempotent
Find nearby real estate markets within a radius using PostGIS proximity search. Returns markets sorted by distance with investment metrics. USDV Capital — Your Real Estate CFO.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Location type | city |
| limit | No | Max results | 10 |
| latitude | Yes | Latitude | |
| longitude | Yes | Longitude | |
| radius_miles | No | Search radius in miles | 25 |
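The defaults above (radius 25 miles, limit 10, type "city") can be mirrored client-side, along with basic coordinate sanity checks. A sketch; the lat/lon bounds are generic geographic ranges, not behavior documented by the server:

```python
# Sketch: apply the documented defaults and range-check coordinates before
# calling markets-nearby. Out-of-range lat/lon is rejected locally.
def nearby_arguments(latitude, longitude, radius_miles=25, limit=10, loc_type="city"):
    if not -90 <= latitude <= 90:
        raise ValueError("latitude must be in [-90, 90]")
    if not -180 <= longitude <= 180:
        raise ValueError("longitude must be in [-180, 180]")
    return {
        "latitude": latitude,
        "longitude": longitude,
        "radius_miles": radius_miles,
        "limit": limit,
        "type": loc_type,  # schema key is "type"; renamed locally to avoid the builtin
    }
```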
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable behavioral context by specifying the sorting behavior ('sorted by distance'), the underlying technology ('PostGIS proximity search'), and the type of results ('investment metrics'), which goes beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: the first explains the tool's function and method, the second provides branding. The first sentence is front-loaded with essential information, though the branding sentence adds minimal functional value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, 100% schema coverage, annotations covering safety aspects), the description provides good context about what the tool does and how it works. The main gap is the lack of output schema, so the description doesn't detail the structure of returned 'investment metrics,' but this is partially compensated by the behavioral transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, but it does imply the purpose of latitude/longitude parameters ('within a radius'). This meets the baseline of 3 when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Find nearby real estate markets'), the method ('using PostGIS proximity search'), and the output ('Returns markets sorted by distance with investment metrics'). It distinguishes from siblings like markets-search or markets-screener by emphasizing proximity-based searching rather than general searching or screening.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding nearby markets based on geographic coordinates, but doesn't explicitly state when to use this tool versus alternatives like markets-search or markets-compare. No exclusions or prerequisites are mentioned, leaving usage context somewhat implied rather than clearly defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markets-screener (Grade A) · Read-only · Idempotent
Screen US cities by composite investment score (0-100) based on cap rate, yield, appreciation, income, vacancy, employment, and population. Pre-computed for 18,800+ cities. Ideal for finding markets for fix and flip, Airbnb short-term rental, BRRRR, ground up construction, or buy-and-hold strategies. USDV Capital — Your Real Estate CFO.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results | 20 |
| state | No | Filter by state code | |
| min_score | No | Minimum investment score (0-100) | |
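The 0-100 score range is stated in both the description and the schema, so it can be validated before the call. A sketch that also omits unset optional filters so the server's own defaults apply (the `.upper()` normalization assumes 2-letter codes like "FL", which the schema implies but does not spell out):

```python
# Sketch: validate min_score against the documented 0-100 range and send
# only the filters that are actually set.
def screener_arguments(state=None, min_score=None, limit=20):
    if min_score is not None and not 0 <= min_score <= 100:
        raise ValueError("min_score must be between 0 and 100")
    args = {"limit": limit}
    if state is not None:
        args["state"] = state.upper()  # assumption: 2-letter codes like "FL"
    if min_score is not None:
        args["min_score"] = min_score
    return args
```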
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral traits (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false). The description adds context by specifying the data scope (US cities, pre-computed for 18,800+ cities) and the score range (0-100), which helps the agent understand the tool's limitations and output characteristics beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage context and branding. Every sentence adds value: the first explains what the tool does, the second provides usage guidelines, and the third adds brand context without redundancy. It is appropriately sized with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (screening based on multiple metrics) and rich annotations, the description is mostly complete. It explains the purpose, data scope, and usage context. However, without an output schema, it doesn't detail the return format (e.g., what fields are included in results), leaving a minor gap for the agent to infer from context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the parameters (limit, state, min_score). The description adds no additional parameter semantics beyond implying the 'min_score' relates to the composite investment score, but this is already clear from the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: screening US cities by a composite investment score based on specific metrics (cap rate, yield, appreciation, income, vacancy, employment, population). It distinguishes from siblings by focusing on pre-computed scores for 18,800+ cities, unlike tools like 'markets-search' or 'markets-compare' which imply different functionalities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: for finding markets for specific real estate strategies (fix and flip, Airbnb short-term rental, BRRRR, ground up construction, buy-and-hold). It implies alternatives by naming these strategies, suggesting other tools (e.g., 'calc-brrrr', 'calc-flip_roi') might be used for detailed calculations after screening.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markets-search (Grade A) · Read-only · Idempotent
Search USDV Capital's real estate market intelligence database covering all 50 US states plus DC and Puerto Rico. Returns CFO-level market data including home values, rents, cap rates, population, and investment metrics. USDV Capital — Your Real Estate CFO.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Filter by location type | |
| limit | No | Maximum results | 20 |
| query | Yes | Location search query (e.g., "Fort Lauderdale", "Broward County", "Texas") | |
| state | No | Filter by state code (e.g., "FL") | |
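Since only `query` is required, a thin argument builder can drop unset optionals entirely rather than sending nulls, letting the server apply its own defaults. A sketch (the pattern is generic, not prescribed by this server):

```python
# Sketch: require a non-empty query and include optional filters only when
# they are explicitly provided.
def search_arguments(query, state=None, loc_type=None, limit=None):
    if not query.strip():
        raise ValueError("query is required")
    args = {"query": query}
    optional = {"state": state, "type": loc_type, "limit": limit}
    args.update({k: v for k, v in optional.items() if v is not None})
    return args
```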
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true, so the agent knows this is a safe, read-only, deterministic search. The description adds context about the database coverage (all 50 US states plus DC and Puerto Rico) and the type of data returned, which is useful but not extensive behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: the first clearly states the tool's function and scope, and the second adds branding. However, the branding sentence ('USDV Capital — Your Real Estate CFO.') adds minimal functional value, slightly reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with filtering), rich annotations, and no output schema, the description is mostly complete. It covers the database scope and data types returned, but could benefit from mentioning result format or limitations to fully compensate for the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description does not add any parameter-specific details beyond what the schema provides, such as examples or constraints, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search') and resources ('USDV Capital's real estate market intelligence database'), and distinguishes it from siblings by specifying it returns 'CFO-level market data' rather than comparisons, nearby searches, or statistics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching real estate market data across US locations, but does not explicitly state when to use this tool versus alternatives like markets-compare, markets-nearby, or markets-screener, leaving the agent to infer based on tool names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markets-stats (Grade B) · Read-only · Idempotent
Get detailed real estate market statistics for a specific location including demographics, housing data, Zillow market trends, investment metrics, and lending availability. Data from Census ACS 2023 and Zillow Feb 2026. USDV Capital — Your Real Estate CFO.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | State name or 2-letter code | |
| location | Yes | City, county, or state name | |
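The schema accepts either a state name or a 2-letter code; normalizing to the code client-side keeps inputs consistent across calls. A sketch with an illustrative (deliberately incomplete) mapping:

```python
# Sketch: normalize "Florida" / "fl" style input to a 2-letter code before
# calling markets-stats. The mapping is a small excerpt for illustration,
# not a complete state table.
_STATE_CODES = {"florida": "FL", "texas": "TX", "california": "CA"}

def normalize_state(value):
    value = value.strip()
    if len(value) == 2:
        return value.upper()
    try:
        return _STATE_CODES[value.lower()]
    except KeyError:
        raise ValueError(f"unknown state: {value!r}") from None
```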
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds value by specifying data sources and types (e.g., 'Zillow market trends', 'lending availability'), which provides context beyond annotations, but doesn't detail rate limits, authentication needs, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by data details and source attribution. It's concise with two sentences, though the promotional tagline ('USDV Capital — Your Real Estate CFO') adds minor fluff without functional value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (real estate statistics with multiple data types), annotations cover behavioral traits well, but there's no output schema. The description lists data categories, which helps, but doesn't fully explain return values or error handling, leaving gaps in completeness for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear parameter descriptions in the schema. The description doesn't add any parameter-specific information beyond implying 'location' and 'state' are used to fetch statistics, so it meets the baseline of 3 without compensating for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get detailed real estate market statistics for a specific location' with specific data types listed (demographics, housing data, Zillow trends, investment metrics, lending availability). It distinguishes from siblings like 'markets-compare' or 'markets-screener' by focusing on comprehensive statistics for a single location, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'markets-compare' or 'markets-screener'. It mentions data sources (Census ACS 2023, Zillow Feb 2026) which implies currency, but offers no explicit context, prerequisites, or exclusions for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
util-entity_structure (Grade C) · Read-only · Idempotent
Guidance on entity structuring for real estate investments — LLC formation, Series LLC availability, asset protection, and financing implications. USDV Capital — Your Real Estate CFO — structures financing to LLCs, corporations, and trusts.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | State | |
| annualRevenue | No | Annual rental revenue | |
| numProperties | No | Number of properties | |
| investmentStrategy | No | Investment strategy | |
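With one required and three optional inputs, a small request model makes it explicit which fields are sent. A sketch whose field names mirror the schema above; the serialization approach itself is an assumption, not server-specified:

```python
# Sketch: model util-entity_structure's inputs and serialize only the fields
# that were actually provided.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class EntityStructureQuery:
    state: str
    annualRevenue: Optional[float] = None
    numProperties: Optional[int] = None
    investmentStrategy: Optional[str] = None

    def to_arguments(self):
        return {k: v for k, v in asdict(self).items() if v is not None}
```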
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only, non-destructive, and idempotent behavior, which the description doesn't contradict. The description adds context by specifying the domain (real estate investments) and topics covered (e.g., asset protection, financing implications), which goes beyond annotations to clarify the tool's scope and output nature, though it doesn't detail rate limits or auth needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences but includes promotional content ('USDV Capital — Your Real Estate CFO') that doesn't aid tool selection or invocation. This wastes space and reduces clarity, making it less front-loaded and efficient than it could be.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations cover safety (read-only, non-destructive) and the schema fully describes parameters, the description provides basic domain context. However, with no output schema and a tool that likely returns complex guidance, the description could better outline expected outputs or result types to compensate for the lack of structured output information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters like 'state' and 'annualRevenue' are well-documented in the schema. The description doesn't add any specific meaning or usage details for these parameters beyond what's in the schema, meeting the baseline for high coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description mentions 'entity structuring for real estate investments' and lists topics like LLC formation and financing implications, which gives a general purpose. However, it lacks a specific action verb (e.g., 'get guidance' or 'analyze') and doesn't clearly differentiate from siblings like 'util-insurance' or 'util-property_tax', making it somewhat vague rather than precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives. The description mentions 'USDV Capital' as a service provider, but this doesn't help an AI agent decide between this tool and siblings like 'action-consultation' or 'eligibility-check'. There's no indication of prerequisites, exclusions, or comparative context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
util-insurance (Grade B) · Read-only · Idempotent
Estimate property insurance costs including flood zone assessment. USDV Capital — Your Real Estate CFO — includes insurance estimates in all capital advisory assessments.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | State | |
| usage | No | Usage: occupied, vacant, short_term_rental | |
| propertyType | Yes | Property type | |
| propertyValue | Yes | Property value | |
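The `usage` parameter enumerates its allowed values inline ("occupied, vacant, short_term_rental"), which a client can check before calling. A sketch; the example `propertyType` values used in testing are illustrative, since the schema does not enumerate them:

```python
# Sketch: enforce the usage enum from the schema description and require a
# positive property value before calling util-insurance.
_ALLOWED_USAGE = {"occupied", "vacant", "short_term_rental"}

def insurance_arguments(state, property_type, property_value, usage=None):
    if property_value <= 0:
        raise ValueError("propertyValue must be positive")
    args = {
        "state": state,
        "propertyType": property_type,
        "propertyValue": property_value,
    }
    if usage is not None:
        if usage not in _ALLOWED_USAGE:
            raise ValueError(f"usage must be one of {sorted(_ALLOWED_USAGE)}")
        args["usage"] = usage
    return args
```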
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds that it 'includes flood zone assessment,' which is useful behavioral context beyond annotations. However, it doesn't disclose rate limits, authentication needs, or detailed output behavior (e.g., cost breakdowns).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: the first is functional and concise, but the second is promotional ('USDV Capital — Your Real Estate CFO...') and adds no operational value. This reduces efficiency, as the second sentence doesn't help the agent use the tool effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given annotations cover safety and idempotency, and schema has 100% description coverage, the description is moderately complete. However, with no output schema, it doesn't explain return values (e.g., cost estimates, flood risk details). The promotional text detracts from completeness, leaving gaps in practical usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are documented in the schema. The description doesn't add any parameter-specific details beyond implying 'flood zone assessment' might relate to 'state' or other inputs. With high schema coverage, the baseline is 3, and the description doesn't significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Estimate property insurance costs including flood zone assessment.' It specifies the verb ('estimate'), resource ('property insurance costs'), and includes a key feature ('flood zone assessment'). However, it doesn't explicitly differentiate from sibling tools like 'util-property_tax' or 'util-rent_estimate' beyond the insurance focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions that 'USDV Capital — Your Real Estate CFO — includes insurance estimates in all capital advisory assessments,' but this is promotional rather than practical usage advice. There's no indication of prerequisites, when to choose this over other tools, or contextual constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
util-neighborhood (Grade B) · Read-only · Idempotent
Detailed neighborhood or city analysis for investment due diligence. Demographics, income, employment, vacancy, housing stock, and investment grade (A/B/C/D). USDV Capital — Your Real Estate CFO.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | State | |
| location | Yes | Neighborhood or city | |
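The description promises an A/B/C/D investment grade, which an agent could use to rank several analyzed locations. A sketch; since the tool publishes no output schema, the result shape below (`{"location": ..., "grade": ...}`) is purely a hypothetical illustration:

```python
# Sketch: rank hypothetical util-neighborhood results by their A/B/C/D
# investment grade. The dict shape is assumed, not documented.
_GRADE_ORDER = {"A": 0, "B": 1, "C": 2, "D": 3}

def sort_by_grade(results):
    return sorted(results, key=lambda r: _GRADE_ORDER[r["grade"]])
```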
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true, covering the safety profile. The description adds context about the analysis scope (demographics, income, employment, etc.) and mentions 'investment grade (A/B/C/D)' which provides useful behavioral context beyond annotations. However, it doesn't disclose potential rate limits, authentication needs, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: one describing the analysis purpose and components, and one providing branding. The first sentence is front-loaded with key information. The branding sentence adds minimal value but doesn't significantly detract from conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (investment analysis with multiple components), lack of output schema, and rich annotations, the description provides adequate purpose context but lacks details about return format, data sources, or analysis methodology. The annotations cover safety aspects, but for a data analysis tool, more behavioral context would be helpful for an agent to understand what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with both parameters clearly documented in the schema. The description doesn't add any parameter-specific semantics beyond what's in the schema (state and location parameters). With complete schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'Detailed neighborhood or city analysis for investment due diligence' and lists specific analysis components (demographics, income, etc.), providing a specific verb+resource combination. However, it doesn't explicitly distinguish this analysis tool from sibling tools like 'markets-stats' or 'markets-search' that might provide related market data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'investment due diligence' as a use case but provides no guidance on when to use this tool versus alternatives like 'markets-screener' or 'markets-stats'. There's no explicit when/when-not guidance or named alternatives, leaving the agent to infer usage context from the description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
util-property_tax (Grade B) · Read-only · Idempotent
Look up property tax rates and estimated annual taxes for any county or city. Essential for calculating holding costs in rental and flip analysis. USDV Capital — Your Real Estate CFO.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | State | |
| county | Yes | County or city name | |
| propertyValue | No | Property value for tax estimate | |
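The holding-cost use case in the description reduces to simple arithmetic once a rate is known: annual tax is the property value times the effective rate, and the monthly holding cost is one twelfth of that. A sketch, independent of the tool's (undocumented) response format:

```python
# Sketch: convert an effective tax rate (in percent) into annual and monthly
# holding-cost figures for rental and flip analysis.
def annual_property_tax(property_value, effective_rate_pct):
    return property_value * effective_rate_pct / 100.0

def monthly_holding_tax(property_value, effective_rate_pct):
    return annual_property_tax(property_value, effective_rate_pct) / 12.0
```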
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. Beyond this, the description adds minimal behavioral context: it carries 'USDV Capital' branding but does not detail rate limits, authentication needs, or specific data sources. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core functionality, the second provides usage context, and the third is branding. While the branding sentence adds minimal functional value, the overall structure is efficient with zero wasted words on repeating schema details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema), annotations cover safety aspects, and schema has full description coverage. The description adds purpose and usage context but lacks details on return values (e.g., tax rate format, estimated tax calculation method) or error handling. It's adequate but has clear gaps for a tool without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description mentions 'county or city' and 'property value for tax estimate,' which aligns with but doesn't add significant meaning beyond the schema. With high schema coverage, the baseline score of 3 is appropriate as the description provides no extra parameter insights.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Look up property tax rates and estimated annual taxes for any county or city.' It specifies the verb ('look up'), resource ('property tax rates and estimated annual taxes'), and scope ('any county or city'). However, it doesn't explicitly differentiate from sibling tools like 'util-insurance' or 'util-rent_estimate' beyond the domain context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context: 'Essential for calculating holding costs in rental and flip analysis.' This suggests when to use it (real estate financial analysis) but doesn't explicitly state when not to use it or name alternatives among sibling tools. The context is clear but lacks explicit guidance on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
util-rent_estimate — Read-only, Idempotent
Estimate fair market rent by location and bedroom count. Includes Zillow and Census data, STR comparison, and vacancy rates. USDV Capital — Your Real Estate CFO.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | State | |
| bedrooms | Yes | Number of bedrooms | |
| location | Yes | City or neighborhood | |
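As a sketch of how an agent would invoke this tool, the following builds an MCP `tools/call` JSON-RPC request for util-rent_estimate. The parameter names come from the table above; the example values ("TX", "Austin", 3) are hypothetical.

```python
import json

# Hypothetical tools/call request for util-rent_estimate.
# Argument keys match the parameter table; values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "util-rent_estimate",
        "arguments": {
            "state": "TX",         # State (required)
            "bedrooms": 3,         # Number of bedrooms (required)
            "location": "Austin",  # City or neighborhood (required)
        },
    },
}

# Serialize for transport (e.g., over Streamable HTTP).
payload = json.dumps(request)
print(payload)
```

All three arguments are required, so a conforming client should reject a call that omits any of them before sending the request.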
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable context about data sources (Zillow, Census, STR comparison, vacancy rates) and the provider (USDV Capital), which helps understand the tool's behavior and reliability. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences: one stating the tool's purpose and data sources, and another providing branding. It's front-loaded with key information, though the branding sentence adds minimal functional value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has moderate complexity (3 required parameters, no output schema), and the annotations cover safety aspects well. The description adds data-source context but lacks details on output format, accuracy, or limitations. It's adequate but has clear gaps in completeness for a tool that estimates financial values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear parameter descriptions in the schema. The description mentions 'location and bedroom count' which aligns with parameters but doesn't add meaningful semantics beyond what's in the schema. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Estimate fair market rent by location and bedroom count.' It specifies the verb (estimate) and resource (fair market rent) with key parameters. However, it doesn't explicitly differentiate from sibling tools like 'calc-rental_cashflow' or 'calc-str_revenue' which might also involve rent calculations, missing full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions data sources (Zillow, Census, STR comparison) but doesn't specify scenarios, prerequisites, or exclusions. With many sibling tools in real estate analysis, this lack of contextual guidance is a significant gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
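Before publishing, it can help to sanity-check the file locally. This sketch validates only the structure shown above (the `$schema` URL and `maintainers` field come from the example); it is not the official Glama validator.

```python
import json

# Parse a candidate /.well-known/glama.json document (inlined here
# for illustration; in practice you would read it from a file).
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

# Basic structural checks mirroring the documented shape.
assert doc["$schema"].startswith("https://glama.ai/mcp/schemas/")
assert isinstance(doc["maintainers"], list) and doc["maintainers"]
assert all("@" in m["email"] for m in doc["maintainers"])
print("glama.json structure looks valid")
```

Remember that the email must match your Glama account email, which no local check can verify.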
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.