claw.cleaning
Server Details
Book a San Francisco apartment cleaning. $40/hr, weekends 8am-6pm PT, SF only.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: GetColby/claw-cleaning
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 4 of 4 tools scored.
Each tool has a distinct and non-overlapping purpose: check_availability lists slots, check_booking_status retrieves customer bookings, initiate_booking starts a booking with conditional logic, and force_checkout_booking forces a checkout URL. The descriptions clearly differentiate their roles, eliminating any ambiguity in tool selection.
All tool names follow a consistent verb_noun pattern with snake_case (e.g., check_availability, initiate_booking). This uniformity makes the tool set predictable and easy to understand, enhancing usability for agents.
With 4 tools, the set is well-scoped for a cleaning booking server, covering core operations like availability checking, booking initiation, status retrieval, and a specialized checkout override. Each tool serves a necessary function without redundancy or bloat.
The tool set provides strong coverage for the cleaning booking domain, including availability, booking creation, status checking, and handling payment failures. A minor gap exists in not having tools for updating or canceling bookings, but agents can likely work around this given the server's focus on core workflows.
Available Tools
4 tools

check_availability
List available cleaning slots. Saturdays and Sundays only, 8 AM – 6 PM PT, San Francisco only. Omit date for the next 8 upcoming weekend days.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | Optional YYYY-MM-DD date (must be Saturday or Sunday). | |
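The weekend-only constraint and the default behavior (the next 8 upcoming weekend days when `date` is omitted) can be sketched client-side. This is an illustrative pre-check, not server code; `next_weekend_days` and `validate_slot_date` are hypothetical helper names:

```python
from datetime import date, timedelta

def next_weekend_days(start: date, count: int = 8) -> list[str]:
    """Enumerate the next `count` weekend days (Sat/Sun) on or after
    `start`, as YYYY-MM-DD strings — mirrors the documented default."""
    days: list[str] = []
    d = start
    while len(days) < count:
        if d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
            days.append(d.isoformat())
        d += timedelta(days=1)
    return days

def validate_slot_date(value: str) -> date:
    """Reject a `date` argument that is not a Saturday or Sunday."""
    d = date.fromisoformat(value)
    if d.weekday() < 5:
        raise ValueError("date must be a Saturday or Sunday")
    return d
```

Validating before calling saves a round trip when an agent composes a date from user input.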
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: time constraints (Saturdays/Sundays 8 AM–6 PM PT), geographic limitation (San Francisco only), default behavior when 'date' is omitted (next 8 upcoming weekend days), and the read-only nature implied by 'List'. It doesn't mention rate limits or error conditions, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and front-loaded. The first sentence establishes the core purpose, followed by specific constraints and parameter guidance. Every sentence earns its place with essential operational information, and there's zero wasted verbiage or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no annotations and no output schema, the description provides excellent context about constraints, defaults, and operational scope. It doesn't describe the return format or structure, which would be helpful given no output schema, but covers most other aspects well for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage for the single optional parameter, but the description adds valuable semantic context beyond the schema. It explains the default behavior when 'date' is omitted (next 8 upcoming weekend days) and reinforces that dates must be Saturday or Sunday, which complements the schema's description. This additional guidance enhances parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List available cleaning slots') and resource ('cleaning slots'), with precise scope constraints (weekends only, specific hours, San Francisco only). It distinguishes from potential siblings like 'check_booking_status' (status checking) or 'initiate_booking' (booking creation) by focusing solely on availability listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('List available cleaning slots') and includes implicit exclusions (only weekends, specific hours/location). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, which would be needed for a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_booking_status
List the bookings for a customer by email, with payment/calendar status for each. Returns most recent first. Covers both first-time (Stripe Checkout) and repeat (direct charge) bookings. Stripe search indexing can lag a few seconds behind a brand-new booking.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Customer email used during checkout. | |
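The warning about Stripe search indexing lag suggests polling briefly when looking up a just-created booking. A minimal sketch, assuming a hypothetical `fetch_bookings` callable that stands in for a check_booking_status call and returns bookings most recent first:

```python
import time

def wait_for_booking(fetch_bookings, email: str, booking_date: str,
                     attempts: int = 3, delay: float = 2.0):
    """Poll for a booking that may not be indexed yet.

    Retries a few times with a short delay to ride out the few-second
    indexing lag mentioned in the tool description."""
    for attempt in range(attempts):
        for booking in fetch_bookings(email):
            if booking.get("date") == booking_date:
                return booking
        if attempt < attempts - 1:
            time.sleep(delay)
    return None  # not found after all attempts
```

An agent that calls check_booking_status immediately after initiate_booking can use this pattern instead of reporting a false "not found".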
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond basic functionality: it specifies the return order ('most recent first'), covers different booking types, and warns about Stripe search indexing lag ('can lag a few seconds behind a brand-new booking'), which are valuable behavioral traits not inferable from the schema alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. Each sentence adds value: the first states the action and scope, the second covers ordering and booking types, and the third provides a critical warning about indexing lag. There is minimal waste, though it could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (1 parameter, no output schema, no annotations), the description is fairly complete. It covers purpose, behavior, and limitations, but lacks details on error handling or response format. With no output schema, it could benefit from more information on return values, but it provides sufficient context for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents the 'email' parameter. The description adds marginal value by clarifying the email is 'used during checkout', but this is redundant with the schema's description. Baseline 3 is appropriate as the schema does the heavy lifting for parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('List the bookings') and resources ('for a customer by email'), and distinguishes it from siblings by specifying it provides 'payment/calendar status for each' and covers 'both first-time (Stripe Checkout) and repeat (direct charge) bookings', unlike availability or booking initiation tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'customer by email' and 'most recent first', but does not explicitly state when to use this tool versus alternatives like 'check_availability' or 'initiate_booking'. It provides some guidance on scope but lacks explicit when/when-not instructions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
force_checkout_booking
Like initiate_booking, but always returns a Stripe checkout URL even for returning customers. Use this when a returning customer's saved card was declined and you want to give them a fresh browser checkout to try another card.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | YYYY-MM-DD. Must be Saturday or Sunday. | |
| name | Yes | Customer's full name. | |
| email | Yes | Customer email. | |
| hours | Yes | Number of hours (1–8). $40/hour. | |
| address | Yes | Full street address. Must be in San Francisco, CA. | |
| startTime | Yes | 24h HH:MM, between 08:00 and 18:00 PT. | |
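The documented constraints on these six parameters can be checked before invoking the tool. A client-side sketch (the validation function is hypothetical, not part of the server):

```python
import re
from datetime import datetime

def validate_booking_args(args: dict) -> list[str]:
    """Pre-check booking arguments against the documented constraints.

    Returns a list of problems; an empty list means the args look valid."""
    problems = []
    try:
        d = datetime.strptime(args["date"], "%Y-%m-%d")
        if d.weekday() < 5:  # 5 = Saturday, 6 = Sunday
            problems.append("date must be a Saturday or Sunday")
    except (KeyError, ValueError):
        problems.append("date must be YYYY-MM-DD")
    hours = args.get("hours")
    if not isinstance(hours, int) or not 1 <= hours <= 8:
        problems.append("hours must be an integer between 1 and 8")
    start = args.get("startTime", "")
    # Zero-padded HH:MM compares correctly as a string.
    if not re.fullmatch(r"\d{2}:\d{2}", start) or not "08:00" <= start <= "18:00":
        problems.append("startTime must be 24h HH:MM between 08:00 and 18:00")
    if "san francisco" not in args.get("address", "").lower():
        problems.append("address must be in San Francisco, CA")
    for field in ("name", "email"):
        if not args.get(field):
            problems.append(f"{field} is required")
    return problems
```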
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: it returns a Stripe checkout URL (output behavior), is intended for returning customers with declined cards (context), and forces a fresh browser checkout (action). However, it doesn't mention potential side effects like creating a booking record or authentication needs, leaving some gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and followed by usage guidance. Every sentence earns its place by providing essential information without redundancy. It's appropriately sized and structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation with 6 required parameters) and no annotations or output schema, the description is mostly complete. It explains the purpose, usage, and behavioral context well. However, it lacks details on what the tool does beyond returning a URL (e.g., if it creates a booking record) and any error handling, which could be useful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description adds no parameter-specific information beyond what the schema provides. According to the rules, with high schema coverage (>80%), the baseline is 3 even with no param info in the description, which applies here.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'always returns a Stripe checkout URL even for returning customers.' It specifies the verb ('returns'), resource ('Stripe checkout URL'), and distinguishes it from sibling 'initiate_booking' by explaining the different behavior for returning customers. This is specific and avoids tautology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Use this when a returning customer's saved card was declined and you want to give them a fresh browser checkout to try another card.' It names the alternative ('initiate_booking') and provides clear context for when to choose this tool over it, making usage guidance comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
initiate_booking
Start a booking. Behavior depends on the customer's payment history:
- First-time customer (no saved card for this email): returns `{ status: "checkout_required", checkoutUrl }`. Share the URL; the customer pays once in their browser and their card is saved for next time.
- Returning customer (has a saved card): charges the card on file immediately, creates the calendar event, and returns `{ status: "booked" }`. No URL needed.
- Charge failure on a returning customer: returns `{ error, code, requiresAction }`. Tell the customer the card was declined and offer to retry; a retry will fall back to a fresh checkout URL they can use with a different card.

Always show the customer a full booking preview (date, start, hours, address, total, email) and get explicit confirmation before calling this tool.
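The three documented response shapes can be dispatched to a next action. A sketch using a hypothetical `handle_booking_response` helper; field names follow the description above, and a real response may carry additional fields:

```python
def handle_booking_response(resp: dict) -> str:
    """Map the three documented initiate_booking outcomes to an agent action."""
    if resp.get("status") == "booked":
        # Saved card charged, calendar event created; nothing more to do.
        return "confirm: charged saved card and created the calendar event"
    if resp.get("status") == "checkout_required":
        # First-time customer: they pay once in the browser.
        return f"share checkout link: {resp['checkoutUrl']}"
    if "error" in resp:
        # Declined charge on a returning customer.
        return "card declined: offer to retry, which falls back to a fresh checkout URL"
    raise ValueError(f"unexpected response shape: {resp!r}")
```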
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | YYYY-MM-DD. Must be Saturday or Sunday. | |
| name | Yes | Customer's full name. | |
| email | Yes | Customer email. Used to identify repeat customers and deliver the calendar invite. | |
| hours | Yes | Number of hours (1–8). $40/hour. | |
| address | Yes | Full street address. Must be in San Francisco, CA. | |
| startTime | Yes | 24h HH:MM, between 08:00 and 18:00 PT. | |
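The required pre-call booking preview, including the $40/hour total from the listing, can be assembled from the same arguments. A sketch; the formatting is illustrative:

```python
RATE_PER_HOUR = 40  # $40/hr, per the listing

def booking_preview(args: dict) -> str:
    """Render the full preview (date, start, hours, address, total, email)
    that the description says to show before calling initiate_booking."""
    total = args["hours"] * RATE_PER_HOUR
    return (
        f"{args['date']} at {args['startTime']} PT for {args['hours']}h "
        f"at {args['address']}, total ${total}, confirmation to {args['email']}"
    )
```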
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It thoroughly describes behavioral traits: payment flow variations (first-time vs. returning customers), outcomes (checkout URL, booked status, error handling), and critical constraints (preview and confirmation requirement). This covers mutation, authentication needs (via email), and error handling beyond what a schema would provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose and then detailing behavioral logic in bullet points. Every sentence adds value, such as explaining payment flows and prerequisites, with no redundant information. Minor improvements could include tighter phrasing, but it remains efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (booking initiation with payment logic), no annotations, and no output schema, the description provides complete context. It covers behavioral outcomes, error handling, prerequisites, and usage guidelines, compensating for the lack of structured fields. This ensures an agent has sufficient information to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all parameters clearly (e.g., date format, time range, address location). The description does not add specific parameter semantics beyond the schema, such as explaining how email is used for identification or pricing details, but it implies context (e.g., email for repeat customers). Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose as 'Start a booking' and elaborates on the booking initiation process with specific behavioral outcomes based on customer payment history. It clearly distinguishes this from sibling tools like check_availability (which checks availability), check_booking_status (which checks status), and force_checkout_booking (which forces checkout), making the verb+resource distinction precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Always show the customer a full booking preview (date, start, hours, address, total, email) and get explicit confirmation before calling this tool.' It also implies when not to use it (e.g., without confirmation or preview) and references alternatives indirectly through sibling tools, ensuring clear context for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
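Before publishing, the manifest can be sanity-checked locally. A minimal sketch, assuming only the structural requirement stated above (a maintainer entry whose email matches your Glama account email); `check_glama_manifest` is a hypothetical helper:

```python
import json

def check_glama_manifest(text: str, account_email: str) -> bool:
    """Return True if the glama.json payload lists `account_email`
    as a maintainer, which is what Glama verifies against."""
    data = json.loads(text)
    maintainers = data.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)
```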
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!