
Philadelphia Restoration

Server Details

Philadelphia water and fire damage restoration: assessment, insurance, costs, and knowledge search.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

7 tools
assess_damage (Grade: B)
Destructive

Returns structured damage assessment for water and fire damage in Philadelphia residential properties. Classifies 13 damage types by severity, provides prioritized immediate safety actions, estimates restoration costs with Philadelphia market rates, and includes neighborhood-specific risk context. Based on IICRC S500/S520/S700 standards and field experience from Philadelphia restoration companies.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| damage_type | Yes | Type of water or fire damage to assess | |
| neighborhood | No | Philadelphia neighborhood (e.g., 'fishtown', 'center-city') | |
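For illustration, an MCP client would invoke this tool with a JSON-RPC `tools/call` request along these lines (the argument values are hypothetical, since the server's actual damage-type enum is not shown on this page):

```python
import json

# Hypothetical tools/call payload for assess_damage; argument values
# are illustrative, not taken from the server's schema enums.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "assess_damage",
        "arguments": {
            "damage_type": "basement-flooding",  # required
            "neighborhood": "fishtown",          # optional slug
        },
    },
}
payload = json.dumps(request)
```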
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, but description uses read-only language ('Returns', 'assess') giving no indication of what gets modified or destroyed. Fails to explain openWorldHint implications. Description implies a safe read operation while annotations warn of data mutation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
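For context, these hints live in the MCP tool definition itself. A minimal sketch of the mismatch being flagged, with the description abbreviated and annotation field names taken from the MCP specification:

```python
# Sketch of an MCP tool definition whose annotations contradict its
# read-only-sounding description (the scenario flagged above). The
# description text is abbreviated; annotation field names follow the
# MCP specification.
tool = {
    "name": "assess_damage",
    "description": "Returns structured damage assessment ...",
    "inputSchema": {
        "type": "object",
        "properties": {
            "damage_type": {"type": "string"},
            "neighborhood": {"type": "string"},
        },
        "required": ["damage_type"],
    },
    "annotations": {
        "readOnlyHint": False,    # not a pure read
        "destructiveHint": True,  # may perform destructive updates
        "openWorldHint": True,    # interacts with external systems
    },
}

# The contradiction, stated as a check: the description reads like a
# lookup while the hints declare mutation.
reads_like_lookup = tool["description"].startswith("Returns")
mismatch = reads_like_lookup and tool["annotations"]["destructiveHint"]
```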

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two information-dense sentences with zero waste. Front-loaded with core purpose, followed by specific capabilities (classifications, actions, costs), and closes with standards authority. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description adequately previews return structure (damage types, safety actions, cost estimates, risk context). However, fails to explain destructive side effects indicated in annotations, leaving a critical gap for an agent determining when to invoke this safely.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, description adds valuable domain context: explicitly references '13 damage types' (matching the enum) and clarifies 'Philadelphia market rates' and 'neighborhood-specific risk context', explaining why the neighborhood parameter matters beyond its string type.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Returns structured damage assessment' provides specific verb and resource, 'water and fire damage' defines scope, and 'Philadelphia residential properties' establishes geographic/domain constraints that distinguish it from generic assessment tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus siblings like `estimate_cost` (which also estimates costs) or `get_emergency_steps` (which also provides safety actions). No prerequisites or exclusion criteria provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_insurance_coverage (Grade: A)
Destructive

Returns Pennsylvania insurance coverage analysis for water and fire damage claims. Evaluates coverage likelihood by HO policy type (HO-1 through HO-6), applies PA-specific regulations including bad faith statute (42 Pa.C.S. § 8371) and Act 119, and provides step-by-step claims process guidance with common denial reasons and appeals strategies.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| cause | No | Specific cause of damage (e.g., 'frozen pipes', 'grease fire') | |
| damage_type | Yes | Type of water or fire damage | |
| policy_type | No | Homeowner's insurance policy type | HO-3 |
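Since `policy_type` carries a documented default, a client might normalize arguments before calling. A sketch; applying the HO-3 fallback client-side is an assumption, since the server presumably also defaults it:

```python
# Build check_insurance_coverage arguments, applying the documented
# HO-3 default when no policy type is supplied.
def build_coverage_args(damage_type, cause=None, policy_type=None):
    args = {"damage_type": damage_type, "policy_type": policy_type or "HO-3"}
    if cause is not None:
        args["cause"] = cause  # e.g., 'frozen pipes', 'grease fire'
    return args

args = build_coverage_args("water", cause="frozen pipes")
```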
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable legal context (42 Pa.C.S. § 8371, Act 119) beyond annotations, explaining what regulations are applied. However, fails to explain destructiveHint=true (what state is modified?) or openWorldHint implications (external data sources?), which is critical given the mutation annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single dense sentence with zero waste. Front-loaded with the core return value. Information-packed but slightly complex; could benefit from sentence division for readability while maintaining density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates somewhat for missing output schema by detailing return content (analysis, step-by-step guidance, appeals strategies). However, leaves destructive side effects unexplained despite the hint being set, leaving agents unaware of state changes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description mentions 'HO policy type (HO-1 through HO-6)' reinforcing the enum and 'water and fire damage' context, but doesn't add syntax details or input formats beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specific verb ('Returns') + resource ('Pennsylvania insurance coverage analysis') + scope ('water and fire damage claims'). Distinguishes from siblings like assess_damage and estimate_cost by focusing on insurance policy analysis and legal regulations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage context through PA-specificity and policy type scoping, but lacks explicit 'when to use' guidance or comparison to siblings like assess_damage. The Pennsylvania constraint acts as an implicit filter but isn't framed as a usage instruction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

estimate_cost (Grade: B)
Destructive

Returns Philadelphia-area restoration cost estimates broken down by service type. Includes per-unit pricing (per sq ft, per hour), labor rates ($64-$183/hr), factors that increase or decrease total cost (pre-1978 homes, rowhouse access, code upgrades), and insurance deductible guidance. Based on current Philadelphia metro market data.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| severity | Yes | Assessed severity level | |
| damage_type | Yes | Type of damage for cost estimation | |
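Both parameters are required here, so a client-side guard like the following can catch incomplete calls before they reach the server (a sketch; rejecting locally rather than letting the server error is a design choice):

```python
# Reject estimate_cost calls that omit a required field.
REQUIRED = {"severity", "damage_type"}

def missing_fields(arguments):
    return sorted(REQUIRED - arguments.keys())

missing = missing_fields({"damage_type": "fire"})  # severity omitted
```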
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructiveHint=true and idempotentHint=false, indicating state mutation and non-idempotent behavior, yet description provides zero explanation of what gets created, modified, or stored. Adds valuable context about pricing methodology (market data, factors affecting cost) but fails to disclose side effects critical for agent decision-making.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense, well-structured sentences. Front-loads core value proposition (Philadelphia-area estimates by service type) in first sentence; second sentence enumerates specific data points returned. Every clause delivers information about scope, content, or data provenance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates adequately for missing output schema by detailing return contents (labor rates, factors, insurance guidance). However, incomplete given destructive annotations—fails to explain persistence side effects that would affect retry logic and user expectations. No mention of relationship to sibling assess_damage tool despite likely workflow dependency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for damage_type and severity. Description mentions contextual factors (pre-1978 homes, rowhouse access) that implicitly guide damage_type selection but does not explicitly map these to parameter values or add semantic constraints beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific action (returns restoration cost estimates), scope (Philadelphia-area), and granularity (broken down by service type, per-unit pricing). Distinguishes from siblings by focusing specifically on cost/pricing data versus assessment or general knowledge. Lacks explicit differentiation from assess_damage which likely precedes this tool in workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use versus alternatives, prerequisites (e.g., requiring prior damage assessment), or workflow sequencing. Mentions 'insurance deductible guidance' implying use during claims/budgeting phases but does not state when to prefer this over check_insurance_coverage or request_callback.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_emergency_steps (Grade: A)
Destructive

Returns prioritized, time-critical emergency action steps for active water or fire damage in Philadelphia. Includes safety warnings, step-by-step instructions with time sensitivity flags, Philadelphia emergency contacts (PWD, PECO, PGW, Philadelphia Fire Department), and documentation checklist for insurance claims. Use this tool FIRST when a homeowner has an active emergency.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| damage_type | Yes | Type of active emergency damage | |
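The description's "use this tool FIRST" instruction translates naturally into a routing rule on the client side. A sketch, where the `is_active_emergency` flag is an assumed signal derived from the conversation, not part of the schema:

```python
# Route active emergencies to get_emergency_steps before anything else,
# per the description's explicit prioritization; fall back to the
# assess_damage sibling for non-emergencies.
def choose_tool(damage_type, is_active_emergency):
    if is_active_emergency:
        return ("get_emergency_steps", {"damage_type": damage_type})
    return ("assess_damage", {"damage_type": damage_type})

name, arguments = choose_tool("kitchen-fire", is_active_emergency=True)
```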
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

CRITICAL CONTRADICTION: Description uses read-only framing 'Returns... steps' implying lookup, but annotations declare destructiveHint=true and readOnlyHint=false. This suggests the tool performs destructive write operations (potentially dispatching emergency services or creating case records) that the description completely omits, creating dangerous ambiguity for an emergency tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences. First sentence front-loaded with deliverables (steps, warnings, contacts, checklist). Second sentence provides clear usage priority. Zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, description comprehensively details return content (safety warnings, contacts, checklists). However, fails to explain the destructive nature of the operation indicated by annotations—a critical gap for an emergency tool where side effects matter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema has 100% coverage (baseline 3), description adds valuable semantic context by specifying expected values 'water or fire damage' for the damage_type parameter, helping constrain valid inputs beyond the generic schema description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: verb 'Returns' with resource 'prioritized emergency action steps', scope limited to 'Philadelphia' and 'water or fire damage'. Distinguishes from sibling assess_damage by focusing on actionable steps rather than assessment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit prioritization instruction: 'Use this tool FIRST when a homeowner has an active emergency' clearly establishes when to use over siblings like assess_damage or search_restoration_knowledge (non-emergency contexts).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_local_info (Grade: A)
Destructive

Returns Philadelphia-specific local information for damage restoration. Covers 6+ neighborhoods with housing stock analysis (rowhouses, twins, pre-war construction), common damage patterns, flood risk levels, emergency utility contacts, building code requirements, and seasonal risk factors. Helps agents provide neighborhood-aware guidance to Philadelphia homeowners.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| info_type | No | Type of info: neighborhood-specific or city-wide | city |
| neighborhood | No | Philadelphia neighborhood slug (e.g., 'fishtown', 'south-philly') | |
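The two modes the schema implies can be expressed as two argument shapes. A sketch; the slug follows the schema's examples, and relying on the server-side 'city' default when `info_type` is omitted is an assumption:

```python
# City-wide mode: omit info_type and let the documented default apply.
city_wide_args = {}

# Neighborhood mode: both parameters supplied together.
neighborhood_args = {
    "info_type": "neighborhood",
    "neighborhood": "south-philly",  # slug per the schema examples
}
```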
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and openWorldHint=true (external dependencies), but description fails to explain what gets destroyed, why a 'get' operation has side effects, or the nature of external data source dependencies. Only lists data content without behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences. Front-loaded with action and scope. Second sentence densely lists coverage areas. Third sentence clarifies value proposition. No redundancies or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description compensates by enumerating return content categories (housing types, damage patterns, utility contacts, codes). Adequate for simple 2-parameter tool, though could mention response format or the destructive side effects implied by annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description adds '6+ neighborhoods' context and lists specific data categories (housing stock, flood risk) which aligns with the neighborhood parameter, but doesn't add syntax, validation rules, or enum semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Returns' + specific resource 'Philadelphia-specific local information' + domain context 'damage restoration'. Distinguishes from siblings like assess_damage or search_restoration_knowledge by emphasizing geographic/neighborhood specificity and local housing stock data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage ('Helps agents provide neighborhood-aware guidance') but lacks explicit guidance on when to use this vs search_restoration_knowledge or assess_damage. No mention of when city-wide vs neighborhood-specific mode is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

request_callback (Grade: A)
Destructive

Submits a callback request for a Philadelphia homeowner dealing with water or fire damage. A restoration concierge calls back within 15 minutes during business hours (Mon-Fri 8am-6pm ET) to assess the situation and connect with vetted local professionals. Requires phone number and situation description. Returns a reference number for tracking.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| name | No | Homeowner's name (optional) | |
| phone | Yes | Phone number for callback (US format) | |
| situation | No | Brief description of the damage situation | |
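The schema says 'US format' without pinning it down, so any client-side validation is an assumption. A sketch that accepts common 10-digit US forms with an optional +1 prefix:

```python
import re

# Loose US phone check: optional +1, optional separators, 10 digits.
# This pattern is an assumption; the server's accepted formats are
# not documented on this page.
US_PHONE = re.compile(r"^(\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$")

args = {
    "phone": "(215) 555-0142",                     # required
    "name": "Sam",                                 # optional
    "situation": "Burst pipe, water in basement",  # optional
}
valid = bool(US_PHONE.match(args["phone"]))
```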
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Substantial value beyond annotations: discloses timing constraints (15 minutes, Mon-Fri 8am-6pm ET), process details (restoration concierge assesses and connects with vetted professionals), and return value (reference number). Aligns with destructiveHint=true by clarifying this creates a real-world commitment (someone will actually call).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: scope/purpose, operational details/timing, and prerequisites/returns. No redundancy with schema or annotations. Every sentence earns its place with specific actionable constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete coverage despite no output schema: description explicitly states return value (reference number), geographic constraints (Philadelphia), temporal constraints (business hours), and required parameters. Annotations cover safety profile. Sufficient for agent to invoke confidently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for name, phone, and situation. Description reinforces that phone is required ('Requires phone number') but adds no significant semantic depth beyond the schema definitions. Appropriate baseline score for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity with 'Submits a callback request' (verb + resource) and clear scope restriction to 'Philadelphia homeowner dealing with water or fire damage.' Explicitly distinguishes from siblings like get_emergency_steps (immediate actions) or assess_damage (evaluation) by focusing on the human callback mechanism.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clear prerequisites stated ('Requires phone number and situation description') and implicit context via Philadelphia/business hours constraints. However, lacks explicit comparison to siblings (e.g., when to use get_emergency_steps vs this callback request) or explicit 'do not use if outside Philadelphia' warning.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_restoration_knowledge (Grade: A)
Destructive

Semantic search across 60+ expert documents covering water and fire damage restoration. Topics include drying science, moisture mapping, equipment protocols, mold prevention (IICRC S520), fire restoration (IICRC S700), insurance adjuster tactics, contractor evaluation, and Philadelphia housing patterns. Returns relevant excerpts with source citations and relevance scores. Grounded in IICRC standards with section numbers.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Natural language search query (max 1000 characters) | |
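Given the documented 1000-character cap on `query`, a client can clamp before sending. Truncating (rather than rejecting) over-long queries is a design choice here, not documented server behavior:

```python
# Enforce the documented query length limit client-side.
MAX_QUERY_CHARS = 1000

def clamp_query(query: str) -> str:
    return query[:MAX_QUERY_CHARS]

long_query = "how long should dehumidifiers run after a water loss " * 40
clamped = clamp_query(long_query)
```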
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, which is unusual for a search tool, yet the description does not explain why searching knowledge would be destructive or modify state. However, it adds valuable corpus context (60+ documents, IICRC standards with section numbers) beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four well-structured sentences with clear information hierarchy: purpose first, then topic specifics, then return format, then authority grounding. The topic list is long but necessary for domain clarity. No redundant or obvious statements.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates well for missing output schema by explicitly describing return value structure ('relevant excerpts with source citations and relevance scores'). Domain grounding (IICRC standards) is appropriate for the complexity. Would benefit from clarification of the destructive/openWorld annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema fully documents the query parameter (100% coverage), the description adds significant semantic value by listing valid query domains (drying science, mold prevention, insurance adjuster tactics), guiding users toward effective queries without repeating the schema's 'Natural language search query' definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Semantic search across 60+ expert documents' provides clear verb, resource, and scope. The detailed topic list (drying science, IICRC S520/S700, Philadelphia housing) clearly distinguishes this from siblings like assess_damage (assessment), get_emergency_steps (procedural), or request_callback (communication).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The extensive topic list provides implicit usage guidance on what questions are appropriate, but there is no explicit comparison to siblings (e.g., 'use this for research versus assess_damage for initial evaluation'). No 'when not to use' guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
