Glama

Server Details

AI-native scheduling and booking: check availability, book meetings, share links.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
book_meeting (Book Meeting), Grade: C

Book a new meeting slot

Parameters (JSON Schema):
- page (required): The URL of the scheduling page (e.g. https://meet.bot/your-page)
- start (required): Start time in ISO 8601 format (e.g. 2025-03-10T14:00:00Z); must be an available slot
- guest_name (required): Full name of the guest
- guest_email (required): Email address of the guest (used for calendar invite and confirmation)
- notes (optional): Optional notes to include with the meeting (e.g. agenda, call details)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, yet the description fails to disclose critical behaviors: whether calendar invites are sent automatically, how conflicts are handled if the slot is taken, idempotency, and the permissions required for booking.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is appropriately front-loaded with no waste, but it is excessively brief for a five-parameter mutation operation, resulting in under-specification rather than elegant conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a booking mutation with no annotations and no output schema, the description is insufficient. It omits booking workflow context, failure modes, side effects (email notifications), and success criteria.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed parameter descriptions (ISO 8601 format, URL examples), so the baseline score of 3 applies. The tool description adds no supplemental semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description, 'Book a new meeting slot', uses a clear verb and resource, but lacks specificity about what 'booking' entails (creating calendar events, sending invites) and only minimally distinguishes the tool from its sibling get_available_slots.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on prerequisites (e.g. verifying availability first), when to use the tool versus alternatives, or workflow sequencing with sibling tools like get_available_slots.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
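The review notes that the schema, not the description, carries all of the parameter semantics. A minimal sketch of what a conforming MCP tools/call request for book_meeting might look like, assuming standard JSON-RPC framing; the page URL, times, and guest details are illustrative, not taken from a live server:

```python
import json

# Hypothetical tools/call request for book_meeting; argument values are
# illustrative, matching the formats the input schema documents.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "book_meeting",
        "arguments": {
            "page": "https://meet.bot/your-page",  # required: scheduling page URL
            "start": "2025-03-10T14:00:00Z",       # required: ISO 8601, must be an available slot
            "guest_name": "Ada Lovelace",          # required
            "guest_email": "ada@example.com",      # required: used for invite and confirmation
            "notes": "Agenda: intro call",         # optional
        },
    },
}

# The schema marks four of the five parameters as required.
required = {"page", "start", "guest_name", "guest_email"}
missing = required - set(request["params"]["arguments"])
print(json.dumps(sorted(missing)))  # → []
```

Because the description is silent on conflict handling, a cautious agent should treat a failed call as a possibly stale slot and re-check availability rather than retry blindly.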

get_available_slots (Get Available Slots), Grade: C

Get available booking slots for a scheduling page

Parameters (JSON Schema):
- page (required): The URL of the scheduling page (e.g. https://meet.bot/your-page)
- start (optional): Start date for the range in YYYY-MM-DD format
- end (optional): End date for the range in YYYY-MM-DD format
- count (optional): Maximum number of slots to return (defaults to server limit)
- timezone (optional): IANA timezone for slot times (e.g. America/New_York, Europe/London)
- booking_link (optional): If true, include shareable booking links in the response

Behavior: 2/5

With empty annotations, the description carries the full burden of behavioral disclosure but provides none. It does not clarify whether this checks real-time availability, what happens when no slots exist, rate limits, or that the operation is read-only (critical given that the sibling book_meeting performs writes).

Conciseness: 3/5

While brief and free of verbosity, the single eight-word sentence is under-specified rather than efficiently concise. It front-loads the action but wastes the opportunity to add contextual value in subsequent sentences.

Completeness: 2/5

Given six parameters, no output schema, no annotations, and sibling tools that form a booking workflow, the description is incomplete. It lacks critical context about the response format and about how this tool fits into the booking lifecycle alongside book_meeting.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description adds no semantic value beyond the schema: the relationship between the start and end dates is not explained, nor is the optional nature of most parameters versus the required page parameter highlighted.

Purpose: 4/5

The description states a clear verb ('Get') and resource ('booking slots') with scope ('for a scheduling page'). It distinguishes the tool from the siblings book_meeting (retrieval vs. creation) and get_scheduling_pages (slots for a specific page vs. a page listing), though it could explicitly note that this is a read-only availability check.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives, nor does it explain the typical workflow (e.g. that it should be called before book_meeting to find valid times, or that the page parameter likely comes from get_scheduling_pages).
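The schema implies constraints the description never states: start/end are YYYY-MM-DD dates that should form a valid range, and timezone must be a resolvable IANA name. A sketch of cheap client-side checks under those assumptions, with illustrative argument values:

```python
from datetime import date
from zoneinfo import ZoneInfo

# Illustrative arguments for get_available_slots; values are hypothetical.
args = {
    "page": "https://meet.bot/your-page",  # required
    "start": "2025-03-10",                 # optional, YYYY-MM-DD
    "end": "2025-03-14",                   # optional, YYYY-MM-DD
    "timezone": "America/New_York",        # optional, IANA name
    "count": 10,                           # optional, capped by the server limit
    "booking_link": True,                  # optional, include shareable links
}

# Validate the implied constraints before spending a tool call.
start = date.fromisoformat(args["start"])  # raises on a malformed date
end = date.fromisoformat(args["end"])
assert start <= end, "start date must not be after end date"
ZoneInfo(args["timezone"])  # raises if the IANA name is unknown
print(f"querying {(end - start).days + 1} days of availability")  # → querying 5 days of availability
```

Whether the server actually rejects an inverted range or silently returns nothing is exactly the kind of behavior the review flags as undocumented.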

get_page_info (Get Page Information), Grade: C

Get information about a specific scheduling page

Parameters (JSON Schema):
- page (required): The URL of the scheduling page (e.g. https://meet.bot/your-page)

Behavior: 2/5

Annotations are empty, so the description carries the full burden. It discloses the target resource ('scheduling page') but provides no behavioral details: what specific information is returned (owner, availability settings, duration limits?), whether responses are cached, rate limits, or error conditions. 'Get information' is too vague for a tool returning presumably rich data.

Conciseness: 4/5

A single sentence, appropriately front-loaded with the action. No redundant words, though the brevity sacrifices helpful context about return values and sibling differentiation.

Completeness: 3/5

Adequate for a single-parameter read operation with perfect schema coverage. However, lacking an output schema, the description should ideally hint at what the 'information' comprises (e.g. page settings, owner details, availability windows). The sibling tools suggest a scheduling domain, but the description does not leverage this context.

Parameters: 3/5

With 100% schema coverage, the schema fully documents the page parameter as a URL with a clear example. The description adds no additional parameter guidance (e.g. whether the URL must be canonical, or whether shortened URLs are accepted), warranting the baseline score for high-coverage schemas.

Purpose: 4/5

Provides a clear verb ('Get') and resource ('information about a specific scheduling page'), distinguishing it from the mutation tools. However, it fails to differentiate itself from the sibling get_scheduling_pages (which likely returns a list): it does not clarify that this retrieves detailed info for a single page rather than listing multiple pages.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus siblings like get_scheduling_pages or get_available_slots, and no prerequisites are indicated (e.g. must the page exist? does it require specific permissions?).
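Since the review notes it is undocumented whether canonical or shortened URLs are accepted, the most a client can do before calling get_page_info is verify the URL's shape. A sketch, using the schema's own example value:

```python
from urllib.parse import urlparse

# The schema's example value; whether non-canonical or shortened URLs
# are accepted by the server is undocumented.
page = "https://meet.bot/your-page"
parts = urlparse(page)
assert parts.scheme == "https" and parts.netloc, "page must be an absolute URL"
print(parts.netloc)  # → meet.bot
```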

get_scheduling_pages (Get Scheduling Pages), Grade: B

Get all scheduling pages for the authenticated user

Parameters (JSON Schema): none

Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses scope ('for the authenticated user') and completeness ('all'), implying no filtering. However, it lacks the details on pagination, rate limits, and return format that would be expected given zero annotation coverage.

Conciseness: 5/5

Eight words with zero waste. Front-loaded with the action and resource. Every word earns its place.

Completeness: 3/5

Adequate for a zero-parameter read operation, but gaps remain: with no output schema provided, the description should ideally indicate what gets returned (e.g. a list of pages with metadata). It also lacks any distinction from the sibling get_page_info.

Parameters: 4/5

The input schema has zero parameters, establishing a baseline score of 4. The description does not need to compensate for missing parameter documentation.

Purpose: 4/5

Provides a specific verb ('Get') and resource ('scheduling pages') scoped to the authenticated user. It is clear what the tool does, though it does not explicitly differentiate itself from the sibling get_page_info (which presumably gets a specific page rather than all of them).

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus get_page_info or other siblings, and no prerequisites or filtering capabilities are indicated.
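The reviews repeatedly fault the descriptions for omitting workflow sequencing. The presumed lifecycle (list pages, check availability, then book) can be sketched as follows; the tool names are real, but call_tool is a stand-in for an MCP client and its canned responses are hypothetical, since no output schemas are published:

```python
# Stand-in for an MCP client call; returns canned, hypothetical responses
# here so the sequencing is runnable without a live server.
def call_tool(name: str, arguments: dict) -> dict:
    canned = {
        "get_scheduling_pages": {"pages": [{"url": "https://meet.bot/your-page"}]},
        "get_available_slots": {"slots": ["2025-03-10T14:00:00Z"]},
        "book_meeting": {"status": "confirmed"},
    }
    return canned[name]

# 1. Discover the user's scheduling pages (no parameters).
page = call_tool("get_scheduling_pages", {})["pages"][0]["url"]

# 2. Check availability for a specific page before mutating anything.
slots = call_tool("get_available_slots", {"page": page})["slots"]

# 3. Book only a slot the availability check returned.
booking = call_tool("book_meeting", {
    "page": page,
    "start": slots[0],
    "guest_name": "Ada Lovelace",
    "guest_email": "ada@example.com",
})
print(booking["status"])  # → confirmed
```

This is the "use X before Y" guidance the Usage Guidelines scores say the descriptions should state explicitly.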

health_check (Health Check), Grade: A

Check if the Meet.bot API client is healthy and the Bearer token is valid

Parameters (JSON Schema): none

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses that token validation occurs, but omits what 'healthy' means, the potential error responses for invalid tokens, and whether the operation is read-only (though that is implied by 'Check').

Conciseness: 5/5

A single sentence with zero waste. Every element earns its place: action verb, target system, health aspect, auth aspect, and validation state.

Completeness: 3/5

Adequate for a zero-parameter tool, but gaps remain: there is no output schema description (what indicates healthy vs. unhealthy?), no mention of side effects or rate limits, and no annotation coverage to compensate.

Parameters: 4/5

The input schema has zero parameters (100% coverage by default), establishing a baseline of 4. The description appropriately focuses on tool behavior rather than inventing parameter documentation where none exists.

Purpose: 5/5

The description provides a specific verb ('Check') and clear resources ('Meet.bot API client', 'Bearer token'), explicitly distinguishing this diagnostic tool from functional siblings like book_meeting and get_available_slots.

Usage Guidelines: 4/5

Clear context is provided by specifying that the tool validates health and token status, implying use for connectivity and auth verification. However, it lacks explicit guidance such as 'call before other operations' or troubleshooting scenarios.
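The missing 'call before other operations' guidance amounts to a gating pattern. A sketch under assumptions: call_tool is a stand-in client, and the response shape (healthy/token_valid fields) is hypothetical, since no output schema is published:

```python
# Stand-in for an MCP client call; the response fields are an assumption,
# as health_check publishes no output schema.
def call_tool(name: str) -> dict:
    if name == "health_check":
        return {"healthy": True, "token_valid": True}  # canned healthy response
    return {}

# Gate the rest of the workflow on a passing health check, so auth
# failures surface here rather than mid-booking.
status = call_tool("health_check")
if not (status.get("healthy") and status.get("token_valid")):
    raise RuntimeError("Meet.bot API unreachable or Bearer token invalid")
print("ok")  # → ok
```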
