ImmoStage Virtual Staging
Server Details
AI virtual staging for real estate — stage rooms, beautify floor plans, classify images.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: LarryWalkerDEV/mcp-immostage
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools

beautify_floor_plan (Grade A)
Transform 2D floor plans into stunning 3D isometric architectural renders. Provide either a public image URL or a base64-encoded image.
| Name | Required | Description | Default |
|---|---|---|---|
| quality | No | Output quality. medium=faster/cheaper, high=better detail | medium |
| image_url | No | Public URL of the floor plan image | |
| image_base64 | No | Base64-encoded image data (with or without data URI prefix). Use this when the user pastes/uploads an image directly. | |
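The "either a URL or base64" contract described above can be sketched as a request builder. This is a hypothetical helper, not part of the server; the argument names come from the parameter table, and the envelope follows the standard MCP `tools/call` method:

```python
# Sketch of a JSON-RPC tools/call request for beautify_floor_plan.
# build_beautify_request is a hypothetical client-side helper that enforces
# the mutual exclusivity of image_url and image_base64.
def build_beautify_request(image_url=None, image_base64=None, quality="medium"):
    # Exactly one of image_url / image_base64 must be supplied.
    if (image_url is None) == (image_base64 is None):
        raise ValueError("provide exactly one of image_url or image_base64")
    args = {"quality": quality}
    if image_url is not None:
        args["image_url"] = image_url
    else:
        args["image_base64"] = image_base64
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "beautify_floor_plan", "arguments": args},
    }

req = build_beautify_request(image_url="https://example.com/plan.png", quality="high")
```

Raising on zero or two image inputs surfaces the conditional requirement that the JSON schema itself cannot express.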
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (openWorld=true, readOnly=false), the description adds specific output context ('isometric architectural renders') and acknowledges external resource requirements ('public image URL'). It mentions quality tradeoffs (faster/cheaper) via the schema but doesn't clarify whether the operation is synchronous or asynchronous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence establishes value proposition; second addresses input requirements. Front-loaded with clear action and domain.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete given rich schema coverage and annotations. Could improve by specifying supported image formats or output file type, but 'renders' implies image output sufficient for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage providing strong baseline, description adds value by establishing the mutual exclusivity relationship between image_url and image_base64 ('either... or') and contextualizing when to use base64 encoding (user paste/upload scenarios).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Transform' with exact resource '2D floor plans' and output format '3D isometric architectural renders'. Distinctly differs from siblings like stage_room (staging) and classify_room (categorization) by focusing on floor plan visualization.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear input options ('either a public image URL or base64-encoded image') with implicit logic for when to use base64 (direct uploads), but lacks explicit guidance on when to choose this over sibling tools like stage_room or optimize_listing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
classify_room (Grade A, Read-only, Idempotent)
Classify room images - detect room type, empty/furnished, quality, suggest style. Provide either a public image URL or a base64-encoded image.
| Name | Required | Description | Default |
|---|---|---|---|
| image_url | No | Public URL of the image to classify | |
| image_base64 | No | Base64-encoded image data (with or without data URI prefix). Use this when the user pastes/uploads an image directly. | |
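The "with or without data URI prefix" behavior of `image_base64` can be sketched as a small normalizer. This is an illustrative assumption about how such input might be handled, not the server's actual implementation; a real server would also validate the declared media type:

```python
# Accept "data:image/png;base64,AAAA..." or a bare "AAAA..." payload and
# return the decoded bytes. Hypothetical helper for illustration only.
import base64

def normalize_image_base64(value: str) -> bytes:
    if value.startswith("data:"):
        # Drop everything up to and including the comma of the data URI.
        _, _, value = value.partition(",")
    return base64.b64decode(value)

raw = base64.b64encode(b"fake-image-bytes").decode()
assert normalize_image_base64(raw) == b"fake-image-bytes"
assert normalize_image_base64("data:image/png;base64," + raw) == b"fake-image-bytes"
```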
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only, idempotent, non-destructive, and external-world access. Description adds valuable behavioral context by specifying what dimensions get classified/analyzed (room type, furnishing status, quality, style) beyond the safety traits in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise two-sentence structure. First sentence establishes purpose and capabilities; second addresses input requirements. No filler or redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates effectively for missing output schema by enumerating classification outputs (room type, empty/furnished, quality, style). Adequate for a 2-parameter tool with comprehensive schema annotations, though return value structure remains unspecified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with full parameter descriptions. Description summarizes the input options ('either a public image URL or a base64-encoded image') but does not add significant semantic value beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Classify') and resource ('room images') with specific detection dimensions (room type, empty/furnished, quality, style). Distinguishes from siblings like 'stage_room' and 'beautify_floor_plan', though overlap with 'suggest_style' sibling could be clearer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides prescriptive input guidance ('Provide either a public image URL or a base64-encoded image') clarifying mutual exclusivity for the two optional parameters. However, lacks explicit guidance on when to use this vs. the 'suggest_style' sibling given both mention style suggestion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
optimize_listing (Grade B, Read-only)
Generate professional German property descriptions from basic listing data.
| Name | Required | Description | Default |
|---|---|---|---|
| floor | No | Floor number (optional) | |
| notes | No | Additional notes about the property | |
| rooms | Yes | Number of rooms | |
| address | Yes | Property address or location | |
| area_sqm | Yes | Living area in square meters | |
| features | No | List of features (e.g. balcony, garage) | |
| property_name | Yes | Name or title of the property | |
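The required/optional split in the table above can be enforced client-side with a minimal argument builder. The parameter names are from the table; the helper itself is a hypothetical sketch, not part of the server:

```python
# Enforce the four required optimize_listing parameters before calling the tool.
REQUIRED = {"rooms", "address", "area_sqm", "property_name"}
OPTIONAL = {"floor", "notes", "features"}

def build_listing_args(**kwargs):
    missing = REQUIRED - kwargs.keys()
    if missing:
        raise ValueError(f"missing required parameters: {sorted(missing)}")
    unknown = kwargs.keys() - REQUIRED - OPTIONAL
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return kwargs

args = build_listing_args(
    property_name="Altbau-Wohnung Mitte",  # illustrative values
    address="Berlin-Mitte",
    rooms=3,
    area_sqm=82.5,
    features=["balcony", "garage"],
)
```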
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=false, but description doesn't explain the non-deterministic generation aspect (why idempotent=false) or mention external dependencies (openWorldHint=true). Adds value by specifying 'German' language and 'professional' tone which annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with verb-fronted structure. 'Generate' is the action, qualified efficiently by 'professional German' and scoped by 'from basic listing data.' Zero redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 7-parameter tool with full schema coverage, but lacks an output specification (does it return a string? an object?) given that no output schema exists. Misses the opportunity to clarify the non-idempotent (non-deterministic) generation behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline applies. Description provides collective context ('basic listing data') but doesn't add individual parameter semantics, constraints, or examples beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Generate') and resource ('German property descriptions') with input transformation specified ('from basic listing data'). However, lacks explicit differentiation from siblings like 'beautify_floor_plan' or 'stage_room' which handle visual aspects rather than text generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like 'classify_room' or 'suggest_style'. No mention of prerequisites or constraints beyond the implicit input requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stage_room (Grade A)
AI virtual staging - transform empty room photos into beautifully furnished spaces. Provide either a public image URL or a base64-encoded image.
| Name | Required | Description | Default |
|---|---|---|---|
| style | Yes | Interior design style | |
| quality | No | Output quality. medium=faster/cheaper, high=better detail | medium |
| image_url | No | Public URL of the room image to stage | |
| room_type | Yes | Type of room | |
| image_base64 | No | Base64-encoded image data (with or without data URI prefix). Use this when the user pastes/uploads an image directly. | |
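The conditional requirement the assessment highlights (style and room_type required outright, plus exactly one image input, even though neither image parameter is marked required in the schema) can be sketched as a validator. The enum values shown are hypothetical:

```python
# Validate stage_room arguments: the schema requires style and room_type,
# while the description's "either ... or" supplies the conditional image rule.
def validate_stage_room_args(args: dict) -> None:
    for key in ("style", "room_type"):
        if key not in args:
            raise ValueError(f"missing required parameter: {key}")
    has_url = "image_url" in args
    has_b64 = "image_base64" in args
    if has_url == has_b64:
        raise ValueError("provide exactly one of image_url or image_base64")

validate_stage_room_args({
    "style": "scandinavian",     # hypothetical enum value
    "room_type": "living_room",  # hypothetical enum value
    "image_url": "https://example.com/empty-room.jpg",
})
```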
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=false and openWorldHint=true. The description adds valuable behavioral context by specifying this is an 'AI' transformation and describing the specific visual change (empty to furnished). However, it omits critical external service details like rate limits, costs (implied by medium/high quality but not stated as behavioral constraints), or what exactly is returned (image data vs URL).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure is efficient: sentence one defines the purpose, sentence two specifies the input method. No tautologies or repetition of the tool name. It could potentially mention the output format to be truly complete, but as-is there is minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and presence of annotations, the description adequately covers the input side. However, for a complex AI generation tool (openWorldHint=true, 5 parameters including quality settings), the absence of any output description (what does the staged result look like? URL? Base64? Dimensions?) leaves a significant gap, especially with no output schema provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds critical semantic information by stating users must 'Provide either a public image URL or a base64-encoded image,' clarifying that one of these two inputs is required despite neither being marked as required in the schema (only style and room_type are required). This compensates for the schema's lack of conditional requirement logic.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'transform[s] empty room photos into beautifully furnished spaces' using 'AI virtual staging'—specific verb, resource, and outcome. However, it does not explicitly differentiate from sibling tool suggest_style (which may also involve room styling), preventing a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage scenarios through the phrase 'transform empty room photos,' indicating when to use it. It also specifies the input requirement ('Provide either a public image URL or a base64-encoded image'), but lacks explicit 'when not to use' guidance or comparison to alternatives like suggest_style or beautify_floor_plan.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
suggest_style (Grade B, Read-only, Idempotent)
Get staging style recommendations based on room type and target audience.
| Name | Required | Description | Default |
|---|---|---|---|
| room_type | Yes | Type of room to stage | |
| property_type | No | Type of property | |
| target_audience | No | Target buyer demographic | |
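The workflow the assessment hints at (call suggest_style first, then feed the recommendation into stage_room) can be sketched as a small orchestration function. Both `call_tool` and the shape of the recommendation response are assumptions for illustration, not documented API:

```python
# Hedged sketch: get a style recommendation, then stage the room with it.
# call_tool is an assumed client function; rec["styles"] is an assumed
# response shape, since no output schema is published.
def plan_staging(call_tool, room_type: str, audience: str, image_url: str):
    rec = call_tool("suggest_style", {
        "room_type": room_type,
        "target_audience": audience,
    })
    style = rec["styles"][0]  # assumed response shape
    return call_tool("stage_room", {
        "style": style,
        "room_type": room_type,
        "image_url": image_url,
    })
```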
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly/idempotent). Description adds domain context ('staging') and output type ('recommendations'), but omits details about recommendation format, quantity, or confidence levels.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with core action front-loaded. No redundancy, but brevity borders on under-specification given lack of output schema documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple read-only recommendation tool with a well-annotated schema. Lacks an output description (no output schema exists); it should specify what the recommendations contain (color palettes, furniture types, etc.).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear enum descriptions. Description mentions two of three parameters ('room type and target audience') but adds no syntax guidance beyond schema. Baseline 3 appropriate given schema carries full semantic load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Get' + resource 'staging style recommendations' with scope implied by parameters. Distinguishes likely intent from sibling 'stage_room' (suggestions vs execution), though doesn't explicitly state this distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage by referencing required inputs ('based on room type and target audience'), but lacks explicit guidance on when to use versus 'stage_room' or whether this should be called before staging execution.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
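Before publishing, the payload can be sanity-checked locally against the documented shape. The field names come from the snippet above; this checker is a hypothetical convenience, not Glama's actual validator:

```python
# Verify that a glama.json payload has the documented structure:
# a maintainers list whose entries each carry an email string.
import json

def check_glama_json(text: str) -> list:
    doc = json.loads(text)
    emails = [m.get("email") for m in doc.get("maintainers", [])]
    if not emails or not all(isinstance(e, str) and "@" in e for e in emails):
        raise ValueError("maintainers must list at least one email")
    return emails

payload = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
assert check_glama_json(payload) == ["your-email@example.com"]
```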
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.