mcp-server

Server Details

Browse property verification missions. Connect with Scouts for GPS-verified tours.

- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway

Connect through the Glama MCP Gateway for full control over tool access and complete visibility into every call.

- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 5 of 5 tools scored.
The tools are mostly distinct with clear purposes: get_mission retrieves a specific mission by ID, get_mission_stats provides aggregate statistics, get_nearby_missions finds missions by location, list_missions offers paginated listing with filtering, and search_missions searches by keyword. However, list_missions and search_missions could be confused, since both involve finding missions; their descriptions clarify that list_missions is for browsing with filters while search_missions is for keyword-based queries.
All tool names follow a consistent verb_noun pattern using snake_case, with verbs like 'get', 'list', and 'search' clearly indicating actions. There are no deviations in naming conventions, making the set predictable and easy to understand at a glance.
With 5 tools, this server is well-scoped for its purpose of managing property verification missions. Each tool serves a distinct and necessary function, such as retrieving details, statistics, location-based queries, listing, and searching, without being overly sparse or bloated.
The tool set covers read operations comprehensively, including retrieval, listing, searching, and statistics. However, there are notable gaps in CRUD/lifecycle coverage, as it lacks tools for creating, updating, or deleting missions, which are essential for full mission management in a platform like HomeVisto.
Available Tools
5 tools

get_mission (Grade A · Read-only · Idempotent)
Get detailed information about a specific property verification mission by its ID. Returns mission details including property address, bounty amount, status, checklist items, and viewer/scout information.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The mission UUID to retrieve. | |
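Since the server is exposed over Streamable HTTP, a call to get_mission ultimately travels as a JSON-RPC 2.0 `tools/call` request, the standard MCP envelope. A minimal sketch of that message (`build_tool_call` is an illustrative helper, not part of any SDK, and the mission UUID is a made-up placeholder):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Assemble a JSON-RPC 2.0 'tools/call' request, the message shape
    MCP clients use to invoke a server tool over the wire."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Placeholder UUID for illustration only, not a real HomeVisto mission.
payload = build_tool_call(
    "get_mission", {"id": "123e4567-e89b-12d3-a456-426614174000"}
)
```

In practice an MCP client library handles this framing for you; the sketch only shows what the tool name and `id` argument map to on the wire.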
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, indicating safe, repeatable read operations. The description adds valuable context by specifying the detailed return fields (property address, bounty amount, status, etc.), which helps the agent understand what information to expect beyond just the schema. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core purpose and efficiently lists the return details. Every element serves a purpose—no wasted words or redundancy. It's appropriately sized for a simple lookup tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema), the description is largely complete. It clearly states the purpose and return fields. However, without an output schema, it could benefit from more detail on response format (e.g., structure of checklist items). The annotations cover safety, but some behavioral aspects like error handling are unspecified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'id' fully documented as 'The mission UUID to retrieve.' The description adds no additional parameter semantics beyond this, but it doesn't need to since the schema is comprehensive. Baseline 3 is appropriate when the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get detailed information') and resource ('property verification mission by its ID'), distinguishing it from siblings like 'get_mission_stats' (statistics), 'get_nearby_missions' (geographic), 'list_missions' (multiple), and 'search_missions' (filtered search). It specifies the exact scope of information returned (property address, bounty amount, status, checklist items, viewer/scout information).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when detailed information about a specific mission is needed, but it doesn't explicitly state when to use this tool versus alternatives like 'list_missions' for overviews or 'search_missions' for filtered queries. No exclusions or prerequisites are mentioned, leaving some ambiguity about optimal use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_mission_stats (Grade A · Read-only · Idempotent)
Get aggregate statistics about missions on the HomeVisto platform. Returns total counts, status breakdown, and average bounty information. Useful for understanding platform activity.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already provide readOnlyHint=true and idempotentHint=true, which the description doesn't contradict. The description adds useful context about what statistics are returned (counts, status breakdown, bounty averages) which helps understand the tool's behavior beyond the safety profile indicated by annotations. However, it doesn't mention potential limitations like data freshness, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and well-structured: two sentences that each earn their place. The first sentence states the core functionality and return values, while the second provides usage context. No wasted words, and the most important information (what statistics are returned) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters, good annotations (readOnly, idempotent), and no output schema, the description provides sufficient context for understanding what the tool does and when to use it. The main gap is the lack of output schema, but the description compensates by specifying what statistics are returned. For a simple statistics retrieval tool, this is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately doesn't discuss parameters since there are none, and the schema already fully documents the empty parameter structure. The description focuses correctly on what the tool does rather than parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get aggregate statistics about missions' with specific details about what it returns ('total counts, status breakdown, and average bounty information'). It distinguishes from siblings by focusing on statistics rather than individual missions or listings. However, it doesn't explicitly contrast with specific sibling tools like 'list_missions' which might also provide some statistical information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance: 'Useful for understanding platform activity' suggests this tool should be used for analytical purposes rather than operational tasks. However, it doesn't explicitly state when to use this tool versus alternatives like 'list_missions' or 'search_missions' which might serve similar analytical needs. No explicit when-not-to-use guidance or named alternatives are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_nearby_missions (Grade B · Read-only · Idempotent)
Find property verification missions within a geographic radius. Useful for scouts looking for missions near their location or for finding missions in a specific area.
| Name | Required | Description | Default |
|---|---|---|---|
| latitude | Yes | Latitude of the center point (-90 to 90). | |
| longitude | Yes | Longitude of the center point (-180 to 180). | |
| radiusKm | No | Search radius in kilometers. Maximum: 100. | 10 |
| status | No | Filter by mission status. | OPEN |
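The coordinate and radius constraints in the table are easy to check client-side before a call, which avoids round-tripping a validation error to the server. A small sketch (`validate_nearby_args` is an illustrative helper; the constraint values come from the parameter table above):

```python
def validate_nearby_args(latitude, longitude, radius_km=10.0, status="OPEN"):
    """Check arguments against the documented constraints before calling
    get_nearby_missions, failing fast on out-of-range values."""
    if not -90 <= latitude <= 90:
        raise ValueError("latitude must be between -90 and 90")
    if not -180 <= longitude <= 180:
        raise ValueError("longitude must be between -180 and 180")
    if not 0 < radius_km <= 100:
        raise ValueError("radiusKm must be positive and at most 100")
    return {"latitude": latitude, "longitude": longitude,
            "radiusKm": radius_km, "status": status}

# e.g. a 10 km search around central Dublin (coordinates approximate)
args = validate_nearby_args(53.35, -6.26)
```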
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and idempotentHint=true, so the agent knows this is a safe, repeatable read operation. The description adds value by specifying the geographic radius aspect and the target audience (scouts), but it doesn't disclose additional behavioral traits like rate limits, authentication needs, or what data is returned. With annotations covering safety, this is adequate but not rich in extra context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured with two sentences: the first states the core purpose, and the second provides usage context. There's no wasted verbiage, and it's front-loaded with the main functionality. It could be slightly more detailed for better differentiation, but it's efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, 100% schema coverage, read-only operation with annotations), the description is somewhat complete but has gaps. It lacks details on output format (no output schema provided), and while it hints at usage, it doesn't fully clarify distinctions from sibling tools. For a read tool with good annotations, it's minimally adequate but could be more comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters (latitude, longitude, radiusKm, status). The description doesn't add any parameter-specific semantics beyond what's in the schema, such as explaining default values or constraints in more detail. Baseline score of 3 is appropriate since the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find property verification missions within a geographic radius.' It specifies the resource (property verification missions) and the action (find within radius), which is specific. However, it doesn't explicitly distinguish this from sibling tools like 'search_missions' or 'list_missions' beyond mentioning geographic radius, leaving some ambiguity about when to use this versus other mission-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: 'Useful for scouts looking for missions near their location or for finding missions in a specific area.' This implies when to use it (for geographic proximity searches), but it doesn't explicitly state when not to use it or name alternatives among the sibling tools. The guidance is helpful but lacks clear differentiation from other mission tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_missions (Grade A · Read-only · Idempotent)
List property verification missions with optional filtering. Returns paginated results sorted by creation date (newest first). Use this to browse available missions on the HomeVisto platform.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of missions to return. Maximum: 100. | 20 |
| offset | No | Number of missions to skip for pagination. | 0 |
| status | No | Filter by mission status. OPEN = available for scouts, COMPLETED = finished missions. | |
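Paginating through list_missions is just arithmetic over limit and offset. A sketch of the mapping from a page number to the tool's arguments (`page_params` is an illustrative helper; the cap of 100 comes from the parameter table):

```python
def page_params(page, page_size=20):
    """Translate a 0-indexed page number into the limit/offset arguments
    list_missions expects; page_size is clamped to the documented max of 100."""
    limit = min(page_size, 100)
    return {"limit": limit, "offset": page * limit}
```

A client would call list_missions with `page_params(0)`, then `page_params(1)`, and so on until a page comes back with fewer than `limit` results.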
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, indicating safe, repeatable operations. The description adds valuable behavioral context: it specifies that results are 'paginated' and 'sorted by creation date (newest first),' which aren't covered by annotations. However, it doesn't mention rate limits, authentication needs, or error handling, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured with two sentences. The first sentence states the purpose and key behavior (filtering, pagination, sorting). The second sentence provides usage guidance. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (list with filtering and pagination), annotations covering safety, and 100% schema coverage, the description is mostly complete. It explains the core behavior and usage context. However, without an output schema, it doesn't detail return values (e.g., mission fields), and it lacks error handling or authentication notes, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear documentation for all three parameters (limit, offset, status). The description adds minimal value beyond the schema by mentioning 'optional filtering' and implying pagination through 'Returns paginated results,' but doesn't provide additional semantic details about parameters. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List property verification missions with optional filtering.' It specifies the resource (property verification missions) and the action (list with filtering). However, it doesn't explicitly differentiate from sibling tools like 'search_missions' or 'get_nearby_missions,' which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: 'Use this to browse available missions on the HomeVisto platform.' This implies a browsing use case but doesn't explicitly state when to choose this tool over alternatives like 'search_missions' or 'get_nearby_missions.' No exclusions or clear alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_missions (Grade A · Read-only · Idempotent)
Search for missions by keyword. Searches across mission title, description, and property address fields. Useful for finding missions in specific locations or with specific requirements.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results to return. Maximum: 100. | 20 |
| query | Yes | Search query to match against title, description, or address. Example: 'apartment Berlin' or 'Dublin 2BR'. | |
| status | No | Filter results by mission status. | |
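Because query is required and status is optional, a caller should omit status entirely rather than send a null. A sketch of assembling the arguments dict (`search_args` is an illustrative helper, not part of the server's API):

```python
def search_args(query, limit=20, status=None):
    """Assemble the arguments dict for search_missions, omitting optional
    fields left unset. query is required and must be non-empty."""
    if not query or not query.strip():
        raise ValueError("query must be a non-empty keyword string")
    args = {"query": query, "limit": min(limit, 100)}
    if status is not None:
        args["status"] = status
    return args
```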
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, so the agent knows this is a safe, repeatable read operation. The description adds useful context about search scope (title, description, address fields) and practical use cases, but doesn't disclose behavioral aspects like pagination, rate limits, or authentication requirements beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence states purpose and scope, second sentence provides practical usage context. Every word earns its place, and the most important information (what the tool does) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with good annotations (readOnly, idempotent) and full schema coverage, the description provides adequate context about search scope and use cases. However, without an output schema, the description doesn't explain what results look like (structure, fields returned), which would be helpful for a search operation. The description is complete enough for basic usage but lacks details about result format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all three parameters. The description mentions 'keyword' search which aligns with the 'query' parameter, but adds no additional semantic context beyond what's in the schema. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search for missions by keyword' with specific search fields (title, description, property address). It distinguishes from 'list_missions' by specifying keyword search, but doesn't explicitly differentiate from 'get_nearby_missions' which likely uses location-based filtering.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance: 'Useful for finding missions in specific locations or with specific requirements.' It suggests when to use this tool but doesn't explicitly contrast with sibling tools like 'get_nearby_missions' (location-based) or 'list_missions' (likely unfiltered listing). No explicit when-not-to-use guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
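Before publishing, you can sanity-check the manifest's shape locally. A small sketch (`manifest_claims_account` is an illustrative helper; Glama's actual verification logic may check more than this):

```python
import json

def manifest_claims_account(raw, account_email):
    """Return True if a /.well-known/glama.json document declares the expected
    schema and lists the given account email among its maintainers."""
    doc = json.loads(raw)
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        return False
    return any(m.get("email") == account_email
               for m in doc.get("maintainers", []))

# Example document matching the structure above; the email is a placeholder.
example = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
```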
Once claimed, you can:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.