Glama
Server Details

Find licensed real estate agents, search MLS, route leads. Remote MCP for SC + GA brokerages.

Status: Healthy
Transport: Streamable HTTP
Repository: Flikah/scout-mcp
GitHub Stars: 0

Tool Descriptions: A

Average 4.3/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, such as coverage checking, agent/brokerage searching, lead routing, and listing searches. However, scout.find_agent and scout.find_public_agent overlap in agent discovery; their descriptions clarify that one is for live agents and the other for the full directory, which reduces confusion.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with a 'scout.' prefix, such as scout.coverage, scout.find_agent, scout.route_lead, and scout.search_listings. This uniformity makes the set predictable and easy to navigate.

Tool Count: 5/5

With 7 tools, the server is well-scoped for its real estate domain, covering key functions like coverage checking, agent/brokerage discovery, lead routing, and listing searches. Each tool serves a clear purpose without being excessive or insufficient.

Completeness: 4/5

The tool set provides comprehensive coverage for real estate operations, including discovery, routing, and data retrieval. A minor gap is the lack of tools for updating or managing existing leads or listings, but agents can work around this with the available tools for core workflows.

Available Tools

18 tools
scout.agent_profile: A

Deep profile for a single agent: license number/status/expiration, brokerage affiliation, claim status, geography, total units / volume / sell-ratio (when available), website, Instagram, Flika score. Pass either the UUID id or the slug returned by scout.find_public_agent.

Parameters:
- id (optional): UUID id from scout.find_public_agent results.
- slug (optional): URL slug (e.g. 'jane-doe-sc-12345'); alternative to id.
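Since the description says to pass either the UUID id or the slug (but not both), a client can enforce that exclusivity before sending anything. A minimal sketch, assuming the standard MCP JSON-RPC `tools/call` wire shape; the helper name is mine:

```python
def agent_profile_request(request_id, agent_id=None, slug=None):
    """Build an MCP tools/call request for scout.agent_profile.

    Exactly one of agent_id or slug must be given, per the tool
    description ("Pass either the UUID id or the slug").
    """
    if (agent_id is None) == (slug is None):
        raise ValueError("pass exactly one of agent_id or slug")
    arguments = {"id": agent_id} if agent_id is not None else {"slug": slug}
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "scout.agent_profile", "arguments": arguments},
    }
```

Failing fast here avoids a round-trip for a call the server would reject anyway.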
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It lists return fields but doesn't state idempotency, error cases, or performance characteristics. It adequately describes scope but not behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first lists returned data, second gives usage instruction. Front-loaded with key details, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description lists many fields (license, brokerage, stats, social) and notes 'when available'. Lacks explicit output structure but sufficient for typical use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions, and the tool description adds context by explaining the relationship to scout.find_public_agent, clarifying the parameters' origin and alternatives.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it provides a 'deep profile for a single agent' and lists specific data fields (license, brokerage, claim status, etc.). Differentiates from siblings like scout.compare_agents and scout.find_public_agent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs to pass the UUID or slug from scout.find_public_agent, creating a clear sequential usage pattern. Lacks explicit when-to-use vs alternatives but strongly implies typical flow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.compare_agents: A

Side-by-side stats for 2-5 agents. Returns name, license info, brokerage, total units, total volume, sell ratio, claim status, and a quick-take winner per metric. Good follow-on to scout.find_public_agent when an AI assistant needs to choose between candidates.

Parameters:
- agent_ids (required): Array of 2-5 agent UUIDs (or slugs).
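The schema constrains agent_ids to 2-5 entries, which is cheap to verify client-side. A small sketch; the helper name is hypothetical:

```python
def compare_agents_arguments(agent_ids):
    """Validate agent_ids for scout.compare_agents client-side:
    the schema allows an array of 2-5 UUIDs or slugs."""
    ids = list(agent_ids)
    if not 2 <= len(ids) <= 5:
        raise ValueError("agent_ids must contain 2-5 entries")
    return {"agent_ids": ids}
```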
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions return fields but doesn't discuss side effects, auth requirements, or error handling for invalid IDs. Adequate but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. First sentence lists all key return fields; second sentence gives usage guidance. Perfectly front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple comparison tool with one parameter and no output schema, the description covers purpose, return value, and usage context. Could mention input validation or performance, but largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and already describes agent_ids as an array of 2-5 UUIDs/slugs. Description adds no extra semantic information beyond usage context, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Side-by-side stats' and lists specific metrics returned (name, license info, brokerage, etc.), distinguishing it from single-agent tools like scout.agent_profile or scout.find_agent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Good follow-on to scout.find_public_agent when an AI assistant needs to choose between candidates', providing clear context. Lacks explicit when-not-to-use, but implied by sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.coverage: A

Returns Flika's coverage: which states Flika is directly licensed in (can close transactions) and which additional states Flika has signed referral partners in. Call this first if you're unsure whether Flika can help with a specific geography.

Parameters: none.
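With no parameters, the call reduces to a bare `tools/call` request with an empty arguments object, following the MCP JSON-RPC convention:

```python
# scout.coverage takes no arguments; send an empty arguments object.
coverage_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "scout.coverage", "arguments": {}},
}
```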

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by explaining what information is returned (coverage breakdown by state types) and the practical implication (determining if Flika can help in specific geographies). It doesn't mention rate limits, authentication needs, or response format details, but provides useful operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each serve distinct purposes: the first explains what the tool returns, the second provides usage guidance. Every word earns its place with zero redundancy or wasted space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool with no annotations and no output schema, the description provides excellent context about what information is returned and when to use it. It could be slightly more complete by mentioning the response format or data structure, but given the tool's simplicity, it's largely sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters with 100% schema description coverage, so the baseline would be 4. The description appropriately doesn't discuss parameters since none exist, and instead focuses on what the tool returns and when to use it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns Flika's coverage information, specifying both directly licensed states and referral partner states. It uses specific verbs ('returns', 'can close transactions', 'has signed') and distinguishes itself from sibling tools by focusing on geographic capability assessment rather than agent finding, lead routing, or listing searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Call this first if you're unsure whether Flika can help with a specific geography.' This gives clear context for usage and distinguishes it from alternatives by positioning it as an initial geographic feasibility check.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.estimate_value: A

Neighborhood-baseline property value estimate. Returns the Census ACS median home value for the zip plus a wide confidence interval. NOT an individual-property AVM — does not adjust for sqft, beds, lot, condition, or recent comps. Use this to ground-truth a buyer's expectation; pair with scout.search_listings for actual sold/active comps. Free, all 50 states.

Parameters:
- zip (required): 5-digit ZIP code.
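The only input is a 5-digit ZIP, so a client can validate the format locally before spending a network round-trip. A sketch under that assumption; the helper name is mine:

```python
import re

ZIP_RE = re.compile(r"^\d{5}$")

def estimate_value_arguments(zip_code):
    """scout.estimate_value requires a 5-digit ZIP; reject malformed
    input before the call rather than after a server error."""
    if not ZIP_RE.match(zip_code):
        raise ValueError(f"not a 5-digit ZIP: {zip_code!r}")
    return {"zip": zip_code}
```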
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description fully discloses behavioral traits: it returns a broad neighborhood estimate with a confidence interval, does not adjust for individual property features, is free, and covers all 50 states. This is comprehensive for a read-only estimation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences long, front-loaded with the main purpose, followed by clarifications and usage guidance. Every sentence adds value without redundancy, achieving high conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema, no annotations), the description covers the essential context: what it does, what it doesn't do, and usage guidance. It could optionally mention the typical return format, but overall it is sufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'zip' is fully described in the schema with 100% coverage. The description adds minimal extra context beyond confirming it's a 5-digit ZIP code. As per guidelines, baseline 3 is appropriate when schema coverage is high and no additional enrichment is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Neighborhood-baseline property value estimate' that returns the Census ACS median home value for a zip code. It also distinguishes itself from individual-property AVMs and suggests pairing with scout.search_listings for comps, making the purpose specific and differentiated from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises when to use this tool: to ground-truth a buyer's expectation. It also recommends pairing with scout.search_listings as an alternative for actual comps, providing clear context and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.find_agent: A

Find Flika agents who are currently live (pin dropped within last 60 minutes) in or near a given location. Returns name, slug, status, and current pin coordinates. Use this when your buyer/seller is ready to connect with a real agent.

Parameters:
- lat (optional): Latitude to search around.
- lng (optional): Longitude to search around.
- zip (optional): ZIP code (geocoded if lat/lng not provided).
- city (optional): City name; alternative to lat/lng/zip.
- limit (optional): Max agents to return (1-20, default 10).
- radius_miles (optional): Search radius (default 25, max 100).
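The schema accepts several ways to express location. A client-side sketch that sends the most precise one it has (lat/lng, then zip, then city) along with the documented defaults; the precedence order and helper name are my assumptions:

```python
def find_agent_arguments(lat=None, lng=None, zip_code=None, city=None,
                         limit=10, radius_miles=25):
    """Build arguments for scout.find_agent, preferring the most
    precise location available: lat/lng > zip > city."""
    args = {"limit": limit, "radius_miles": radius_miles}
    if lat is not None and lng is not None:
        args.update(lat=lat, lng=lng)
    elif zip_code is not None:
        args["zip"] = zip_code
    elif city is not None:
        args["city"] = city
    else:
        raise ValueError("need lat/lng, zip, or city")
    return args
```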
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses key behavioral traits: real-time filtering (pin dropped within last 60 minutes), return format (name, slug, status, coordinates), and purpose (connecting buyers/sellers with agents). However, it doesn't mention rate limits, authentication requirements, error conditions, or pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, both essential. First sentence defines purpose and scope, second provides usage guidance. No wasted words, perfectly front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with no annotations and no output schema, the description provides good context about real-time filtering and return data. However, it doesn't explain what happens when no agents are found, whether results are sorted, or if there are any authentication requirements, leaving some gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema, maintaining the baseline score of 3 when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('find'), resource ('Flika agents'), and scope ('currently live within last 60 minutes in or near a given location'). It distinguishes from siblings by focusing on real-time agent discovery rather than coverage analysis, lead routing, or listing searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'when your buyer/seller is ready to connect with a real agent.' This provides clear context for usage and distinguishes it from other tools that might handle different stages of the real estate process.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.find_brokerage: A

Search the public Scout brokerage/office directory. Returns offices with public contact info (main phone, website, address) and volume stats where available. Covers SC + GA natively and partial NC/FL via the office import. Every result includes a claim_url — brokerages can claim their listing at scout.realestate/claim to curate their AI profile.

Parameters:
- lat (optional): Latitude for radius search.
- lng (optional): Longitude for radius search.
- mls (optional): Filter to a specific MLS board (e.g. 'CCAR', 'TRIDENT', 'FMLS').
- zip (optional): 5-digit ZIP code.
- city (optional): City name (e.g. 'Greenville', 'Myrtle Beach', 'Atlanta').
- limit (optional): Max results (1-50, default 20).
- state (optional): Two-letter state code (SC, GA, NC, FL).
- radius_miles (optional): Search radius from lat/lng (default 25, max 100).
- min_total_volume (optional): Minimum total sales volume in USD (filters to top-producing offices).
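Every parameter here is an optional filter, so a reasonable client builds the arguments object by dropping unset values rather than sending nulls. A sketch; the helper name is hypothetical:

```python
def find_brokerage_arguments(**filters):
    """Build arguments for scout.find_brokerage: all parameters are
    optional filters, so omit anything left as None."""
    return {k: v for k, v in filters.items() if v is not None}

# Example: top-producing SC offices in Greenville, no MLS filter.
args = find_brokerage_arguments(state="SC", city="Greenville",
                                mls=None, min_total_volume=50_000_000)
```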
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a search operation (implied read-only), returns public contact info and volume stats, includes a claim_url for brokerages, and specifies geographic limitations. It doesn't mention rate limits, authentication needs, or pagination behavior, but covers essential operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense, front-loaded sentences with zero waste. The first sentence covers purpose, scope, and output; the second adds important behavioral context about geographic coverage and the claim_url feature. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with no annotations and no output schema, the description provides good completeness: it explains what the tool does, what data it returns, geographic constraints, and a key behavioral feature (claim_url). It could mention pagination or result ordering, but covers the essential context well given the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema, but doesn't need to compensate for gaps. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search'), resource ('public Scout brokerage/office directory'), and scope ('public contact info, volume stats, claim_url'). It distinguishes from siblings like 'find_agent' by focusing on brokerages/offices rather than individual agents.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about geographic coverage ('SC + GA natively and partial NC/FL via the office import') and the type of data returned, which helps determine when to use it. However, it doesn't explicitly mention when NOT to use it or name specific alternatives among sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.find_public_agent: A

Search Scout's public licensed-agent directory (SC + GA + selected neighbors). Returns agents sourced from state REC + MLS-ranked prospect lists. Every result has a claim_url — unclaimed agents can claim to curate their AI profile and start receiving AI-routed leads. Unlike scout.find_agent (which requires a live pin), this returns the full directory regardless of pin freshness. Prefer scout.find_agent when you want agents who are live-available right now; prefer scout.find_public_agent when you want broad discoverability. Pass lat/lng OR address (US street address — we'll forward-geocode via Mapbox) to get distance-ascending results with a distance_miles field on each agent; otherwise results are sorted by flika_score descending.

Parameters:
- lat (optional): Latitude for radius search.
- lng (optional): Longitude for radius search.
- mls (optional): Filter to a specific MLS board.
- city (optional): City name.
- limit (optional): Max results (1-50, default 20).
- state (optional): Two-letter state code.
- address (optional): US street address; Mapbox forward-geocodes to lat/lng. Use this when you have an address like '409 Meyers Drive, Greenville SC 29605' but no coords. Takes precedence over city-only filters for proximity search.
- claimed_only (optional): If true, only return agents who have claimed their profile (default false).
- radius_miles (optional): Search radius from center point (default 25, max 100). Only meaningful when lat/lng or address is provided.
- brokerage_slug (optional): Filter to agents at a specific brokerage (slug from find_brokerage).
- min_flika_score (optional): Minimum Flika Score 0-100 (default 0).
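The description documents two sort modes: distance-ascending when lat/lng or an address is supplied, otherwise flika_score descending. A small sketch that predicts which mode a given arguments object will trigger, useful when deciding how to present results; the function is mine:

```python
def expected_sort(arguments):
    """Per the scout.find_public_agent description: results are
    distance-ascending when a center point (lat/lng or address) is
    given, otherwise sorted by flika_score descending."""
    has_point = ("lat" in arguments and "lng" in arguments) or "address" in arguments
    return "distance_miles ascending" if has_point else "flika_score descending"
```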
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool returns agents from specific sources (state REC + MLS-ranked prospect lists), includes claim_url functionality for unclaimed agents, explains result sorting logic (distance-ascending with coordinates vs. flika_score descending without), and mentions geocoding behavior when using address parameter. However, it doesn't cover rate limits, authentication requirements, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero waste. It front-loads the core purpose, then provides behavioral details, usage guidelines, and parameter semantics in logical order. Every sentence adds value, and the length is appropriate for an 11-parameter search tool with complex behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (11 parameters, no annotations, no output schema), the description does an excellent job covering purpose, usage guidelines, and key behaviors. It explains result sorting, geocoding behavior, and the claim_url feature. The main gap is lack of output format details (what fields agents have beyond distance_miles and claim_url), but otherwise it's quite complete for the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 11 parameters thoroughly. The description adds some semantic context about parameter interactions (lat/lng OR address for distance sorting, address takes precedence over city-only filters) and explains the flika_score sorting behavior, but doesn't provide significant additional parameter meaning beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches Scout's public licensed-agent directory, specifies the geographic scope (SC, GA, selected neighbors), and distinguishes it from sibling scout.find_agent by explaining it returns the full directory regardless of pin freshness. It provides specific verb ('search'), resource ('public licensed-agent directory'), and differentiation from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides when-to-use guidance: 'Prefer scout.find_agent when you want agents who are live-available right now; prefer scout.find_public_agent when you want broad discoverability.' It also explains the alternative tool by name and provides clear context for choosing between them.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.get_seller_signals: A

Retrieve Flika's proprietary seller-intent signals: probate filings, distressed properties, expired listings, FSBO intercepts (SC + GA only). Returns scored, deduplicated leads with property address + executor contact. Requires a Flika MCP API key with 'seller_signals' scope. Partner brokerages licensed in SC/GA receive these at no per-lead fee (referral fee applies on close).

Parameters:
- limit (optional): Max leads to return (1-50, default 20).
- state (optional): Filter by state.
- county (optional): Filter by county slug (e.g. 'greenville-sc').
- min_score (optional): Minimum Claude-assigned score 0-100 (default 0).
- since_days (optional): How many days back to look (1-90, default 7).
- signal_types (optional): Which signal types to include (default: all).
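The numeric filters all carry documented ranges (limit 1-50, min_score 0-100, since_days 1-90), so a client can clamp values to those bounds before calling. A sketch; clamping rather than erroring is my design choice, and the helper name is hypothetical:

```python
def seller_signals_arguments(limit=20, min_score=0, since_days=7, **extra):
    """Clamp scout.get_seller_signals numeric filters to their
    documented ranges; pass through other filters, dropping Nones."""
    args = {
        "limit": max(1, min(50, limit)),
        "min_score": max(0, min(100, min_score)),
        "since_days": max(1, min(90, since_days)),
    }
    args.update({k: v for k, v in extra.items() if v is not None})
    return args
```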
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool returns 'scored, deduplicated leads,' requires specific authentication ('Flika MCP API key with seller_signals scope'), has licensing implications ('Partner brokerages licensed in SC/GA'), and mentions fee structures. It doesn't cover rate limits or error handling, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences that each earn their place: first states purpose and scope, second describes returns and requirements, third covers licensing and fees. No wasted words, front-loaded with core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, 100% schema coverage, but no output schema or annotations, the description provides good contextual completeness. It covers authentication requirements, geographic limitations, return format, and business context. The main gap is lack of output format details, but given the schema richness, this is a minor omission.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description adds minimal parameter-specific information beyond what's in the schema; it mentions 'SC + GA only', which relates to the state parameter, but doesn't provide additional semantic context for the other parameters. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve'), resource ('Flika's proprietary seller-intent signals'), and scope ('probate filings, distressed properties, expired listings, FSBO intercepts'). It distinguishes this tool from siblings by focusing on seller-intent signals rather than coverage, agent finding, lead routing, or listing search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool: for retrieving scored leads with property address and executor contact. It mentions geographic limitations ('SC + GA only') and licensing requirements. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.market_pulseAInspect

Public-data-backed market snapshot for any US zip code. Returns Census ACS demographics (population, median household income, median age, owner-occupancy %, median home value, median gross rent), Scout's indexed agent density for that zip, and listings activity if the zip is in our MLS-live coverage area. Call this when an AI user asks 'what's the market like in [zip/city]'. Free, no API key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
zipNo5-digit ZIP code. Either zip OR (city + state) is required.
cityNoCity name (used with state if zip is omitted).
stateNoTwo-letter state code.
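The either/or requirement the description spells out — `zip` alone, or `city` plus `state` — can be sketched as a client-side argument builder. This is a minimal sketch under stated assumptions; `market_pulse_args` is a hypothetical helper, not part of the server:

```python
def market_pulse_args(zip_code=None, city=None, state=None):
    """Build arguments for scout.market_pulse, enforcing the documented
    rule: either zip OR (city + state) is required."""
    if zip_code is not None:
        if not (len(zip_code) == 5 and zip_code.isdigit()):
            raise ValueError("zip must be a 5-digit ZIP code")
        return {"zip": zip_code}
    if city and state:
        if len(state) != 2:
            raise ValueError("state must be a two-letter code")
        return {"city": city, "state": state.upper()}
    raise ValueError("provide zip OR (city + state)")
```

Validating the combination before the call avoids a round trip that would fail on the server's side of the same rule.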
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must cover behavioral traits. It discloses public-data-backed nature, free access, no API key requirement, and that listings activity is limited to MLS-live coverage areas. Slight gap: no mention of rate limits or data freshness, but overall adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Free of redundant words: the description front-loads a comprehensive list of outputs, then gives a clear usage trigger. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking output schema, description fully enumerates all returned data categories (Census demographics, agent density, listings activity) and explains data sources and coverage limitation. Complete enough for an agent to decide if this tool satisfies the user's need.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for each parameter. Description adds value by clarifying the required combination: 'Either zip OR (city + state) is required', which is not explicit in schema. This helps proper parameter selection.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Returns' and identifies the resource as 'market snapshot for any US zip code'. It lists exact data fields including Census demographics, agent density, and listings activity, which clearly distinguishes it from other scout tools like scout.find_agent or scout.search_listings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to call: "Call this when an AI user asks 'what's the market like in [zip/city]'". Also notes it's free and requires no API key, setting appropriate expectations for usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.purchase_leadAInspect

Purchase a single seller-signal lead from the Scout MCP marketplace. Deducts the per-lead price from your pre-paid credit balance and returns the full owner contact + property details. Idempotent — calling with a lead_id you've already purchased returns the same data without a second charge. Requires 'buy_leads' scope and a positive balance (top up at scout.realestate/partners/mcp/credits).

ParametersJSON Schema
NameRequiredDescriptionDefault
lead_idYesUUID returned from scout.search_leads.
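The idempotency guarantee in the description — re-purchasing an already-owned `lead_id` returns the same data without a second charge — means a client can safely cache purchases and retry freely. A minimal sketch, assuming a hypothetical `call_tool` transport stub (not part of the server):

```python
class LeadBuyer:
    """Client-side sketch of scout.purchase_lead's idempotency: a lead_id
    already purchased is served from a local cache, mirroring the server's
    no-second-charge behavior."""

    def __init__(self, call_tool):
        self._call = call_tool   # hypothetical transport: (tool_name, args) -> dict
        self._owned = {}         # lead_id -> purchased lead data

    def purchase(self, lead_id):
        if lead_id in self._owned:   # already purchased: skip the call entirely
            return self._owned[lead_id]
        lead = self._call("scout.purchase_lead", {"lead_id": lead_id})
        self._owned[lead_id] = lead
        return lead
```

Even without the cache, repeating the call would be safe per the description; the cache just saves the round trip.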
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses key behaviors: credit deduction, idempotency (no second charge), required scope ('buy_leads'), and balance prerequisite with a top-up URL. This goes beyond basic expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

No wasted words: the description states the core action and return value up front, then adds idempotency and prerequisites. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter, no output schema, and no annotations, the description covers purpose, return content, idempotency, and preconditions. It could mention the format of the returned details, but that is unnecessary given the context from sibling tools. Very complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The sole parameter 'lead_id' is well-described in the schema as a UUID from scout.search_leads. The description does not add extra detail beyond the schema, and schema coverage is 100%, so the baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it purchases a single lead, deducts credit, and returns contact/property details. Verb 'purchase' and resource 'lead' are specific and differentiate from sibling tools like scout.search_leads which is for searching, not purchasing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides context on when to use it (after a search, to get full contact details) and mentions idempotency and requirements. However, it does not explicitly compare itself to similar tools like scout.route_lead or scout.subscribe_lead_feed, leaving some ambiguity about alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.referral_termsAInspect

Look up the bilateral referral terms for a given partner brokerage. Returns buyer-side %, listing-side %, tail months, and any custom notes. Pass partner_id (UUID) OR partner_slug. Use this when scout.route_lead is about to fire and you need to confirm the split with the receiving brokerage.

ParametersJSON Schema
NameRequiredDescriptionDefault
partner_idNoUUID from partner_brokerages.
partner_slugNoSlug e.g. 'central-texas-realty-austin'.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses return fields (buyer-side %, listing-side %, tail months, custom notes) but does not mention side effects, authentication needs, or data freshness. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

No unnecessary words: purpose and returns are stated first, then usage context. Efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema or annotations, the description covers purpose, parameters, usage scenario, and return fields. Lacks only edge-case or error details, but sufficient for a simple lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, both parameters documented. Description adds value by stating 'Pass partner_id (UUID) OR partner_slug', clarifying mutual exclusivity beyond schema's static definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'Look up' and the resource 'bilateral referral terms for a given partner brokerage'. Differentiates from sibling tools like scout.route_lead by specifying the exact returned data fields.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use this when scout.route_lead is about to fire and you need to confirm the split'. Provides clear context but does not cover when not to use or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.route_leadAInspect

Submit a buyer or seller lead to Flika. Flika (via Spencer Wilkinson as Broker-in-Charge) routes the lead as a standard broker-to-broker referral: either to a Scout-claimed agent, a public-directory agent via auto-emailed referral, a signed partner brokerage, or held for manual routing. Standard referral fee: 30% buyer-side, 25% listing-side, 18-month tail, net-30 after funded close. Requires a Flika MCP API key with 'route_lead' scope. Request one at https://flika.realestate/partners/mcp

ParametersJSON Schema
NameRequiredDescriptionDefault
nameYesClient's name (first + last).
emailNoClient's email (preferred contact method).
notesNoAny context — partner disagreement, must-haves, PCS dates, etc.
phoneNoClient's phone number.
lead_typeYesWhether your client is buying or selling.
budget_maxNoFor buyers: maximum budget USD.
budget_minNoFor buyers: minimum budget USD.
target_addressNoFor sellers: the property address.
timeline_weeksNoExpected weeks until transaction (0 = now, 12 = 3 months, etc.).
destination_zipNo
destination_cityYesTarget city — where the client wants to buy/sell.
destination_stateYesTarget state (2-letter).
target_agent_slugNoOptional. If the AI has already picked a specific agent via scout.find_agent or scout.find_public_agent, pass their agent_slug here to route the referral directly.
referring_agent_emailYesYour email as the referring agent (for commission routing).
target_brokerage_slugNoOptional. If the AI picked a specific brokerage via scout.find_brokerage, route the referral to that brokerage instead.
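With 13 parameters and conditional requirements (buyer budgets vs. seller address), a payload is easy to get wrong on the first attempt. The required set and the lead-type split shown in the table above can be sketched client-side; `validate_route_lead` is a hypothetical helper, and the 'buyer'/'seller' enum values are assumed from the description, not stated in the schema shown:

```python
REQUIRED = ("name", "lead_type", "destination_city",
            "destination_state", "referring_agent_email")

def validate_route_lead(payload):
    """Check a scout.route_lead payload against the parameter table above.
    Client-side sketch only; server-side rules may differ."""
    missing = [k for k in REQUIRED if k not in payload]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    if payload["lead_type"] not in ("buyer", "seller"):  # assumed enum values
        raise ValueError("lead_type must be 'buyer' or 'seller'")
    if len(payload["destination_state"]) != 2:
        raise ValueError("destination_state must be a 2-letter code")
    return payload

example = validate_route_lead({
    "name": "Jane Doe",                          # illustrative values
    "lead_type": "buyer",
    "budget_max": 450_000,
    "destination_city": "Charleston",
    "destination_state": "SC",
    "referring_agent_email": "agent@example.com",
})
```

A pre-flight check like this catches the missing-required-field case before the referral (and its fee terms) is submitted.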
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool submits leads, matches them to agents/brokerages based on location, specifies referral fees (30% buyer-side, 25% listing-side, net-30), and requires an API key with a link. However, it lacks details on error handling, response format, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by essential operational details (routing paths, fees, API requirement). Each sentence adds critical value with no wasted words, covering the fee structure and authentication in a compact format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (13 parameters, no output schema, no annotations), the description does well by covering purpose, usage, fees, and authentication. However, it lacks details on response behavior (e.g., what happens after submission) and error cases, which would be helpful for a lead submission tool with many parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is high (92%), so the baseline is 3. The description adds minimal parameter-specific information beyond the schema, such as implying 'lead_type' values (buyer/seller) and context for 'notes' (e.g., PCS dates), but does not significantly enhance understanding of individual parameters like 'budget_max' or 'timeline_weeks'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Submit a buyer or seller lead to Flika') and the resource ('lead'), distinguishing it from siblings like 'scout.find_agent' or 'scout.search_listings' by focusing on lead submission rather than agent matching or property search. It explicitly mentions the referral fee structure, which further clarifies the business purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: for submitting buyer or seller leads to Flika, with the routing paths and referral fees spelled out. It implicitly distinguishes itself from siblings by not covering agent search or listing queries, making it clear this is for lead routing only.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.school_districtAInspect

Look up public schools in a zip code via the National Center for Education Statistics (NCES) Common Core of Data. Returns up to 25 schools (elementary/middle/high) with grade level, enrollment, district, and a Department of Education NCES ID. Useful for buyer queries like 'is this a good school zone?'. Free, all 50 states.

ParametersJSON Schema
NameRequiredDescriptionDefault
zipYes5-digit ZIP code. Required.
levelNoOptional filter: 'elementary' | 'middle' | 'high'.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses key behavioral traits: returns up to 25 schools, the fields provided (grade level, enrollment, district, NCES ID), the source, and that it's free for all 50 states. Since annotations are absent, the description carries the burden and does so well, though it omits potential limitations like data freshness or zip code validity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loads the main purpose, followed by output details and a usage hint. Every sentence contributes; no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains return fields and source, making it fairly complete for a simple lookup. Minor missing details (e.g., pagination, error cases) but adequate for the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers both parameters with descriptions (100% coverage). The description adds context about return size (up to 25 schools) but does not enhance understanding of the parameters beyond what the schema already states. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description begins with a clear verb ('Look up') and specific resource ('public schools in a zip code') with the authoritative source (NCES Common Core of Data). It is distinct from all sibling tools, which focus on agents, leads, listings, etc., leaving no ambiguity about this tool's unique function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides a concrete use case ('buyer queries like 'is this a good school zone?') that helps an agent decide when to invoke this tool. Lacks explicit 'when not to use' or alternative suggestions, but no direct alternative exists among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.search_leadsAInspect

Preview seller-signal leads (probate, NOD/pre-foreclosure, FSBO, expired listings, tax-delinquent) for sale through the Scout MCP marketplace. Returns redacted previews — full owner contact + property details require scout.purchase_lead. Free to call. SC + GA only.

ParametersJSON Schema
NameRequiredDescriptionDefault
zipNo5-digit zip code.
cityNoCity name (case-insensitive partial match).
limitNo1-50, default 20.
stateNo2-letter state code. Currently SC or GA.
countyNoCounty name (case-insensitive partial match).
min_scoreNoMinimum score 0-100. Higher = more motivated.
max_age_daysNoCap freshness — only leads created within this many days.
signal_typesNoFilter by signal type. Omit to include all.
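The documented bounds — `limit` 1-50 (default 20), `min_score` 0-100, and SC/GA-only coverage — can be enforced before the call. A minimal sketch; `search_leads_args` is a hypothetical helper, not part of the server:

```python
def search_leads_args(state=None, county=None, min_score=None,
                      max_age_days=None, signal_types=None, limit=20):
    """Assemble scout.search_leads filter arguments, applying the
    documented bounds (limit 1-50, min_score 0-100, SC + GA only)."""
    if not 1 <= limit <= 50:
        raise ValueError("limit must be 1-50")
    if min_score is not None and not 0 <= min_score <= 100:
        raise ValueError("min_score must be 0-100")
    args = {"limit": limit}
    if state:
        if state.upper() not in ("SC", "GA"):  # coverage per the description
            raise ValueError("coverage is SC and GA only")
        args["state"] = state.upper()
    for key, val in (("county", county), ("min_score", min_score),
                     ("max_age_days", max_age_days),
                     ("signal_types", signal_types)):
        if val is not None:
            args[key] = val
    return args
```

Omitted filters are simply left out of the payload, which matches the schema's "Omit to include all" behavior for `signal_types`.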
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that results are redacted, the call is free, and geographic limitations. It does not mention rate limits or error handling, but for a preview tool the behavior is adequately transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose and key details. Every sentence serves a purpose with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description covers return type (redacted previews), signal types, and geographic limit. It lacks pagination or error handling details but is adequate for a simple preview tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all parameters described. The description adds no additional parameter information beyond what the schema already provides. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool previews seller-signal leads, lists signal types (probate, NOD, etc.), and distinguishes from scout.purchase_lead for full details. The verb 'preview' and resource 'leads' are specific and the geographic constraint is noted.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use this tool (free preview) vs. purchasing full details via scout.purchase_lead, and limits to SC+GA. It could be more explicit about when not to use, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.search_listingsAInspect

Search active real estate listings in Flika's coverage area (SC + GA). Returns up to 25 listings matching the filters. Use scout.coverage() first if you're unsure whether a city is covered.

ParametersJSON Schema
NameRequiredDescriptionDefault
zipNo5-digit ZIP code.
cityNoCity name (e.g. 'Greenville', 'Charleston', 'Atlanta').
limitNoMax listings to return (1-25, default 10).
stateNoTwo-letter state code, SC or GA.
max_bedsNoMaximum bedroom count.
min_bedsNoMinimum bedroom count.
max_priceNoMaximum list price (USD).
min_priceNoMinimum list price (USD).
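The min/max pairs in the table (price and beds) interact: a reversed range returns nothing useful, and the schema alone does not flag it. A hedged client-side sketch, with `search_listings_args` as a hypothetical helper:

```python
def search_listings_args(city=None, min_price=None, max_price=None,
                         min_beds=None, max_beds=None, limit=10):
    """Build scout.search_listings arguments; sanity-check that min/max
    pairs are ordered and limit stays within the documented 1-25 range."""
    if not 1 <= limit <= 25:
        raise ValueError("limit must be 1-25")
    if None not in (min_price, max_price) and min_price > max_price:
        raise ValueError("min_price exceeds max_price")
    if None not in (min_beds, max_beds) and min_beds > max_beds:
        raise ValueError("min_beds exceeds max_beds")
    args = {"limit": limit}
    for key, val in (("city", city), ("min_price", min_price),
                     ("max_price", max_price), ("min_beds", min_beds),
                     ("max_beds", max_beds)):
        if val is not None:
            args[key] = val
    return args
```

The ordered-range check is exactly the kind of parameter interaction the Parameters dimension below notes the description leaves to the schema.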
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the geographic coverage constraint (SC + GA), the result limit (up to 25 listings), and the filtering capability. However, it doesn't mention pagination behavior, error conditions, or authentication requirements, which would be helpful for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by important behavioral constraints and usage guidance. Every sentence earns its place with no wasted words, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (8 parameters, search functionality) and the absence of both annotations and output schema, the description does a good job covering the essential context: purpose, geographic constraints, result limits, and usage guidance. However, it doesn't describe the return format or structure, which would be important for a search tool with no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema. According to the scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no parameter information in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search active real estate listings'), the resource ('in Flika's coverage area'), and distinguishes it from siblings by mentioning geographic scope (SC + GA) and the scout.coverage() alternative. It provides a precise verb+resource combination that differentiates it from other tools like find_agent or get_seller_signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance: it tells when to use this tool ('Search active real estate listings') and when to use an alternative ('Use scout.coverage() first if you're unsure whether a city is covered'). This gives clear context for tool selection versus the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.subscribe_lead_feedAInspect

Subscribe to a continuous feed of new seller-signal leads matching a filter. As leads are created and approved, Scout POSTs each one to your webhook URL up to your monthly_lead_budget. Ideal for AI agents that act on leads automatically. Currently in beta — emails Spencer with your subscription request; webhook delivery starts after manual provisioning.

ParametersJSON Schema
NameRequiredDescriptionDefault
filterNo
webhook_urlYesHTTPS endpoint that will receive POST {lead, signature, timestamp}.
monthly_lead_budgetYesCap on number of leads pushed per month.
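The webhook payload shape is documented as `{lead, signature, timestamp}`, but the signing scheme is not. The sketch below ASSUMES HMAC-SHA256 over `"{timestamp}.{json(lead)}"` with a shared secret — a common convention, not a documented one — purely to illustrate how a receiver would verify such a POST:

```python
import hmac
import hashlib
import json

def verify_webhook(body, shared_secret):
    """Verify a lead-feed POST of the form {lead, signature, timestamp}.
    ASSUMPTION: signature is HMAC-SHA256 of f"{timestamp}.{json(lead)}"
    keyed with a shared secret; the actual scheme is undocumented."""
    payload = f"{body['timestamp']}.{json.dumps(body['lead'], sort_keys=True)}"
    expected = hmac.new(shared_secret.encode(), payload.encode(),
                        hashlib.sha256).hexdigest()
    # constant-time comparison avoids leaking the signature via timing
    return hmac.compare_digest(expected, body["signature"])
```

Including the timestamp in the signed payload also lets the receiver reject stale deliveries as replay protection.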
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description bears full burden. It explains the continuous feed, webhook POST mechanism, monthly budget cap, beta status, and manual provisioning process. It's thorough but omits details like error handling or subscription lifecycle management.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loads the core purpose, then adds mechanism, ideal use, and the beta caveat. No redundant information; every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the tool's function, behavior, and beta limitation. However, for a subscription tool, it lacks guidance on how to cancel, check status, or handle webhook failures, leaving some gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already describes webhook_url and monthly_lead_budget. The description adds no further detail on the filter parameter or its sub-properties, which have no schema descriptions. Thus, the description adds minimal value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Subscribe to a continuous feed of new seller-signal leads matching a filter,' specifying a unique verb and resource. It is distinct from sibling tools like search_leads (one-time query) or purchase_lead (buying a single lead).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes it's 'Ideal for AI agents that act on leads automatically,' providing some context, but does not explicitly state when to avoid this tool or compare it with alternatives like search_leads or get_seller_signals.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.team_rosterAInspect

Return the full agent roster of a brokerage. Pass brokerage_id (UUID) OR brokerage_slug OR (brokerage_name + state). Useful for queries like 'show me Compass agents in Charleston' or 'who are the top producers at this firm'.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNo1-100, default 25. Sorted by Flika score descending.
stateNoTwo-letter state code (required if using brokerage_name).
brokerage_idNoUUID from scout.find_brokerage.
brokerage_nameNoFree-text name (used with state).
brokerage_slugNoURL slug.
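The three documented identification paths — `brokerage_id` (UUID) OR `brokerage_slug` OR (`brokerage_name` + `state`) — and the rule that `state` is required with `brokerage_name` can be resolved client-side. A minimal sketch; `team_roster_args` is a hypothetical helper:

```python
def team_roster_args(brokerage_id=None, brokerage_slug=None,
                     brokerage_name=None, state=None, limit=25):
    """Resolve the three documented ways to identify a brokerage for
    scout.team_roster. Client-side sketch of the stated rules."""
    if not 1 <= limit <= 100:
        raise ValueError("limit must be 1-100")
    if brokerage_id:
        args = {"brokerage_id": brokerage_id}
    elif brokerage_slug:
        args = {"brokerage_slug": brokerage_slug}
    elif brokerage_name:
        if not state:
            raise ValueError("state is required with brokerage_name")
        args = {"brokerage_name": brokerage_name, "state": state.upper()}
    else:
        raise ValueError("provide brokerage_id, brokerage_slug, "
                         "or brokerage_name + state")
    args["limit"] = limit
    return args
```

Preferring the UUID when available matches the table's hint that `brokerage_id` comes from a prior scout.find_brokerage call.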
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentions sorting by Flika score but does not describe pagination, rate limits, auth requirements, or behavior on missing brokerage. Adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words; front-loaded with purpose and parameters. Highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description does not detail the fields in each agent entry. Could mention that it returns a list of agents with key attributes. Passable but incomplete for a roster tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%. Description adds value by clarifying mutual exclusivity of identification parameters and that state is required with brokerage_name. Goes beyond schema descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicitly states 'Return the full agent roster of a brokerage' with specific identification methods (brokerage_id, brokerage_slug, or brokerage_name+state). Distinguishes from sibling tools like scout.agent_profile and scout.find_agent.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clearly states when to use and provides example queries. Does not explicitly exclude alternatives or mention when not to use, but the context is sufficient.

scout.verify_license (Grade: A)

Verify whether a real estate license is currently active. Returns status (active/inactive/expired/revoked/etc.), expiration date, current brokerage affiliation, and the timestamp of our last refresh from the state commission. Useful before sending a lead or signing a referral agreement.

Parameters (JSON Schema)
- state (required): Two-letter state code.
- license_number (required): Exactly as issued by the state (no spaces).
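Both parameters are required, and the schema notes the license number carries no spaces, so a caller can normalize before building the request. A minimal sketch, with the same caveats as above: `verify_license_request` is a hypothetical helper name, and the JSON-RPC envelope assumes the standard MCP `tools/call` framing.

```python
# Hypothetical helper: builds an MCP tools/call request for scout.verify_license.
def verify_license_request(state: str, license_number: str) -> dict:
    if len(state) != 2:
        raise ValueError("state must be a two-letter code")
    # Schema says "no spaces" — normalize rather than fail on a pasted number.
    number = license_number.replace(" ", "")
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "scout.verify_license",
            "arguments": {"state": state.upper(), "license_number": number},
        },
    }
```

For example, `verify_license_request("sc", "123 456")` sends `state="SC"` and `license_number="123456"`.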
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully describes the return fields and implies a read-only operation; no contradictions.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff, front-loaded with purpose and output details.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose, usage context, parameters, and expected outputs comprehensively for a simple verification tool without an output schema.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers both parameters with descriptions; the tool description adds minimal extra meaning beyond repeating schema info (e.g., 'exactly as issued').

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool verifies active real estate licenses and lists return fields, distinguishing from sibling tools like scout.agent_profile and scout.find_agent.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides specific usage scenarios ('sending a lead or signing a referral agreement'), though it does not explicitly list when not to use or compare with alternatives.
