Server Details

AI virtual staging for real estate — stage rooms, beautify floor plans, classify images.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
LarryWalkerDEV/mcp-immostage
GitHub Stars
0
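
The transport above is Streamable HTTP, where each client message is a JSON-RPC 2.0 request POSTed to the server's endpoint. Since the URL field on this page is blank, the endpoint below is a placeholder, and the protocol version string is an assumption; this is only a sketch of the first message an MCP client would send:

```python
import json

# Placeholder endpoint: the URL field on this page is blank.
MCP_ENDPOINT = "https://example.com/mcp"  # hypothetical

def make_initialize_request(request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 `initialize` request an MCP client sends first."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            # Version string is an assumption; use whatever revision the
            # server negotiates during initialization.
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }
    return json.dumps(payload)

parsed = json.loads(make_initialize_request())
```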

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
beautify_floor_plan (Grade: A)

Transform 2D floor plans into stunning 3D isometric architectural renders. Provide either a public image URL or a base64-encoded image.

Parameters (JSON Schema)
- quality (optional; default: medium): Output quality. medium = faster/cheaper, high = better detail.
- image_url (optional): Public URL of the floor plan image.
- image_base64 (optional): Base64-encoded image data (with or without data URI prefix). Use this when the user pastes/uploads an image directly.
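
The tool description says to provide either a public image URL or a base64-encoded image, even though neither parameter is individually required by the schema. A hypothetical client-side helper (not part of this server) that builds the `tools/call` request and enforces that exactly-one-of constraint:

```python
def build_beautify_call(image_url=None, image_base64=None, quality="medium"):
    """Build a `tools/call` request for beautify_floor_plan.

    The description says to provide either a public image URL or a
    base64-encoded image, so exactly one of the two must be set.
    """
    if (image_url is None) == (image_base64 is None):
        raise ValueError("provide exactly one of image_url or image_base64")
    if quality not in ("medium", "high"):
        raise ValueError("quality must be 'medium' or 'high'")
    args = {"quality": quality}
    if image_url is not None:
        args["image_url"] = image_url
    else:
        args["image_base64"] = image_base64
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "beautify_floor_plan", "arguments": args},
    }

req = build_beautify_call(image_url="https://example.com/plan.png", quality="high")
```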
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (openWorld=true, readOnly=false), the description adds specific output context ('isometric architectural renders') and acknowledges external resource requirements ('public image URL'). Mentions quality tradeoffs (faster/cheaper) via the schema but doesn't clarify whether the operation is synchronous or asynchronous.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence establishes value proposition; second addresses input requirements. Front-loaded with clear action and domain.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete given rich schema coverage and annotations. Could improve by specifying supported image formats or output file type, but 'renders' implies image output sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage providing strong baseline, description adds value by establishing the mutual exclusivity relationship between image_url and image_base64 ('either... or') and contextualizing when to use base64 encoding (user paste/upload scenarios).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'Transform' with exact resource '2D floor plans' and output format '3D isometric architectural renders'. Distinctly differs from siblings like stage_room (staging) and classify_room (categorization) by focusing on floor plan visualization.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear input options ('either a public image URL or base64-encoded image') with implicit logic for when to use base64 (direct uploads), but lacks explicit guidance on when to choose this over sibling tools like stage_room or optimize_listing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

classify_room (Grade: A)
Read-only, Idempotent

Classify room images - detect room type, empty/furnished, quality, suggest style. Provide either a public image URL or a base64-encoded image.

Parameters (JSON Schema)
- image_url (optional): Public URL of the image to classify.
- image_base64 (optional): Base64-encoded image data (with or without data URI prefix). Use this when the user pastes/uploads an image directly.
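
The image_base64 parameter accepts data with or without a data URI prefix. A small sketch of how a client might normalize a pasted data URI before sending it (the helper name is illustrative):

```python
import base64

def normalize_image_base64(data: str) -> str:
    """Strip an optional data URI prefix; the schema accepts either form."""
    if data.startswith("data:"):
        # A data URI looks like "data:image/png;base64,<payload>";
        # keep only the payload after the first comma.
        _, _, data = data.partition(",")
    return data

raw = b"\x89PNG\r\n fake image bytes"
encoded = base64.b64encode(raw).decode("ascii")
with_prefix = "data:image/png;base64," + encoded
```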
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only, idempotent, non-destructive, and external-world access. Description adds valuable behavioral context by specifying what dimensions get classified/analyzed (room type, furnishing status, quality, style) beyond the safety traits in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise two-sentence structure. First sentence establishes purpose and capabilities; second addresses input requirements. No filler or redundant text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates effectively for missing output schema by enumerating classification outputs (room type, empty/furnished, quality, style). Adequate for a 2-parameter tool with comprehensive schema annotations, though return value structure remains unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with full parameter descriptions. Description summarizes the input options ('either a public image URL or a base64-encoded image') but does not add significant semantic value beyond what the schema already provides, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Classify') and resource ('room images') with specific detection dimensions (room type, empty/furnished, quality, style). Distinguishes from siblings like 'stage_room' and 'beautify_floor_plan', though overlap with 'suggest_style' sibling could be clearer.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides prescriptive input guidance ('Provide either a public image URL or a base64-encoded image') clarifying mutual exclusivity for the two optional parameters. However, lacks explicit guidance on when to use this vs. the 'suggest_style' sibling given both mention style suggestion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

optimize_listing (Grade: B)
Read-only

Generate professional German property descriptions from basic listing data.

Parameters (JSON Schema)
- property_name (required): Name or title of the property.
- address (required): Property address or location.
- rooms (required): Number of rooms.
- area_sqm (required): Living area in square meters.
- floor (optional): Floor number.
- features (optional): List of features (e.g. balcony, garage).
- notes (optional): Additional notes about the property.
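
The schema marks property_name, address, rooms, and area_sqm as required. A hypothetical pre-flight check a client could run before calling the tool; the sample listing values are invented for illustration:

```python
# Required fields per the optimize_listing schema above.
REQUIRED = {"property_name", "address", "rooms", "area_sqm"}

def build_listing_args(**fields):
    """Fail fast if a required field from the schema is missing."""
    missing = REQUIRED - fields.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return fields

# Sample values are invented for illustration.
listing = build_listing_args(
    property_name="3-Zimmer-Altbauwohnung",
    address="Berlin, Prenzlauer Berg",
    rooms=3,
    area_sqm=82.5,
    features=["balcony", "garage"],
)
```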
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=false, but description doesn't explain the non-deterministic generation aspect (why idempotent=false) or mention external dependencies (openWorldHint=true). Adds value by specifying 'German' language and 'professional' tone which annotations don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with verb-fronted structure. 'Generate' is the action, qualified efficiently by 'professional German' and scoped by 'from basic listing data.' Zero redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 7-parameter tool with full schema coverage, but lacks an output specification (does it return a string or an object?) given that no output schema exists. Misses the opportunity to clarify its non-idempotent behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline applies. Description provides collective context ('basic listing data') but doesn't add individual parameter semantics, constraints, or examples beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Generate') and resource ('German property descriptions') with input transformation specified ('from basic listing data'). However, lacks explicit differentiation from siblings like 'beautify_floor_plan' or 'stage_room' which handle visual aspects rather than text generation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus siblings like 'classify_room' or 'suggest_style'. No mention of prerequisites or constraints beyond the implicit input requirement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stage_room (Grade: A)

AI virtual staging - transform empty room photos into beautifully furnished spaces. Provide either a public image URL or a base64-encoded image.

Parameters (JSON Schema)
- style (required): Interior design style.
- room_type (required): Type of room.
- quality (optional; default: medium): Output quality. medium = faster/cheaper, high = better detail.
- image_url (optional): Public URL of the room image to stage.
- image_base64 (optional): Base64-encoded image data (with or without data URI prefix). Use this when the user pastes/uploads an image directly.
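
As with beautify_floor_plan, the two image parameters are individually optional, but one must be supplied alongside the required style and room_type. A sketch of assembling the call arguments; the style and room-type values are invented, since the schema does not enumerate them:

```python
def build_stage_room_args(style, room_type, image_url=None,
                          image_base64=None, quality="medium"):
    """Arguments for stage_room: style and room_type are required by the
    schema; exactly one image source is required per the description."""
    if (image_url is None) == (image_base64 is None):
        raise ValueError("provide exactly one of image_url or image_base64")
    args = {"style": style, "room_type": room_type, "quality": quality}
    if image_url is not None:
        args["image_url"] = image_url
    else:
        args["image_base64"] = image_base64
    return args

# Style and room-type values are hypothetical examples.
staging = build_stage_room_args("scandinavian", "living_room",
                                image_url="https://example.com/empty-room.jpg")
```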
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=false and openWorldHint=true. The description adds valuable behavioral context by specifying this is an 'AI' transformation and describing the specific visual change (empty to furnished). However, it omits critical external service details like rate limits, costs (implied by medium/high quality but not stated as behavioral constraints), or what exactly is returned (image data vs URL).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The two-sentence structure is efficient: sentence one defines the purpose, sentence two specifies the input method. No tautologies or repetition of the tool name. It could potentially mention the output format to be truly complete, but as-is there is minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage and presence of annotations, the description adequately covers the input side. However, for a complex AI generation tool (openWorldHint=true, 5 parameters including quality settings), the absence of any output description (what does the staged result look like? URL? Base64? Dimensions?) leaves a significant gap, especially with no output schema provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds critical semantic information by stating users must 'Provide either a public image URL or a base64-encoded image,' clarifying that one of these two inputs is required despite neither being marked as required in the schema (only style and room_type are required). This compensates for the schema's lack of conditional requirement logic.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'transform[s] empty room photos into beautifully furnished spaces' using 'AI virtual staging'—specific verb, resource, and outcome. However, it does not explicitly differentiate from sibling tool suggest_style (which may also involve room styling), preventing a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage scenarios through the phrase 'transform empty room photos,' indicating when to use it. It also specifies the input requirement ('Provide either a public image URL or a base64-encoded image'), but lacks explicit 'when not to use' guidance or comparison to alternatives like suggest_style or beautify_floor_plan.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

suggest_style (Grade: B)
Read-only, Idempotent

Get staging style recommendations based on room type and target audience.

Parameters (JSON Schema)
- room_type (required): Type of room to stage.
- property_type (optional): Type of property.
- target_audience (optional): Target buyer demographic.
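
Only room_type is required here. A minimal sketch of assembling the arguments while omitting unset optional fields (the helper name and sample values are illustrative):

```python
def build_suggest_style_args(room_type, property_type=None, target_audience=None):
    """Only room_type is required; unset optional fields are omitted
    so the server can apply its own defaults."""
    args = {"room_type": room_type}
    if property_type is not None:
        args["property_type"] = property_type
    if target_audience is not None:
        args["target_audience"] = target_audience
    return args

style_args = build_suggest_style_args("bedroom", target_audience="young families")
```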
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly/idempotent). Description adds domain context ('staging') and output type ('recommendations'), but omits details about recommendation format, quantity, or confidence levels.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with core action front-loaded. No redundancy, but brevity borders on under-specification given lack of output schema documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple read-only recommendation tool with a well-annotated schema, but lacks an output description (no output schema exists); it should specify what the recommendations contain (color palettes, furniture types, etc.).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear enum descriptions. Description mentions two of three parameters ('room type and target audience') but adds no syntax guidance beyond schema. Baseline 3 appropriate given schema carries full semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Get' + resource 'staging style recommendations' with scope implied by parameters. Distinguishes likely intent from sibling 'stage_room' (suggestions vs execution), though doesn't explicitly state this distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage by referencing required inputs ('based on room type and target audience'), but lacks explicit guidance on when to use versus 'stage_room' or whether this should be called before staging execution.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
