Meet.bot MCP
Server Details
AI-native scheduling and booking: check availability, book meetings, share links.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools

book_meeting (Book Meeting), Quality: C
Book a new meeting slot
| Name | Required | Description | Default |
|---|---|---|---|
| page | Yes | The URL of the scheduling page (e.g. https://meet.bot/your-page) | |
| notes | No | Optional notes to include with the meeting (e.g. agenda, call details) | |
| start | Yes | Start time in ISO 8601 format (e.g. 2025-03-10T14:00:00Z); must be an available slot | |
| guest_name | Yes | Full name of the guest | |
| guest_email | Yes | Email address of the guest (used for calendar invite and confirmation) | |
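Assuming the standard MCP JSON-RPC `tools/call` envelope (the exact transport depends on your client), a `book_meeting` request might look like the following sketch. The page URL, start time, and guest details are placeholder values, not real data:

```python
import json

# Hypothetical MCP tools/call payload for book_meeting (JSON-RPC 2.0).
# All argument values below are placeholders.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "book_meeting",
        "arguments": {
            "page": "https://meet.bot/your-page",  # required scheduling-page URL
            "start": "2025-03-10T14:00:00Z",       # required, ISO 8601, must be an available slot
            "guest_name": "Ada Lovelace",          # required
            "guest_email": "ada@example.com",      # required, receives the calendar invite
            "notes": "Agenda: project kickoff",    # optional
        },
    },
}

print(json.dumps(payload, indent=2))
```

Note that `start` must already be a free slot; the description does not say how conflicts are reported, so check availability first.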
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, and the description fails to disclose critical behaviors: whether calendar invites are automatically sent, conflict handling if the slot is taken, idempotency, or required permissions for booking.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is appropriately front-loaded with no waste, but is excessively brief for a 5-parameter mutation operation, resulting in under-specification rather than elegant conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a booking mutation with no annotations and no output schema, the description is insufficient. It omits booking workflow context, failure modes, side effects (email notifications), and success criteria.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions (ISO 8601 format, URL examples), so baseline 3 applies. Tool description adds no supplemental semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states 'Book a new meeting slot' which uses a clear verb and resource, but lacks specificity about what 'booking' entails (creating calendar events, sending invites) and minimally distinguishes from sibling get_available_slots.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on prerequisites (e.g., verifying availability first), when to use versus alternatives, or workflow sequencing with sibling tools like get_available_slots.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_available_slots (Get Available Slots), Quality: C
Get available booking slots for a scheduling page
| Name | Required | Description | Default |
|---|---|---|---|
| end | No | End date for the range in YYYY-MM-DD format | |
| page | Yes | The URL of the scheduling page (e.g. https://meet.bot/your-page) | |
| count | No | Maximum number of slots to return (defaults to server limit) | |
| start | No | Start date for the range in YYYY-MM-DD format | |
| timezone | No | IANA timezone for slot times (e.g. America/New_York, Europe/London) | |
| booking_link | No | If true, include shareable booking links in the response | |
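A corresponding `get_available_slots` call, again as a hedged sketch of the JSON-RPC `tools/call` shape; only `page` is required, and every value below is a placeholder:

```python
import json

# Hypothetical MCP tools/call payload for get_available_slots.
# Only "page" is required; range, count, timezone, and booking_link are optional.
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_available_slots",
        "arguments": {
            "page": "https://meet.bot/your-page",  # required scheduling-page URL
            "start": "2025-03-10",                 # optional, YYYY-MM-DD
            "end": "2025-03-14",                   # optional, YYYY-MM-DD
            "count": 10,                           # optional cap (defaults to server limit)
            "timezone": "America/New_York",        # optional IANA timezone
            "booking_link": True,                  # optional: include shareable links
        },
    },
}

print(json.dumps(payload, indent=2))
```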
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With empty annotations, the description carries full burden for behavioral disclosure but provides none. It does not clarify if this checks real-time availability, what happens when no slots exist, rate limits, or that the operation is read-only (critical given the 'book_meeting' sibling performs writes).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief and free of verbosity, the single 7-word sentence is under-specified rather than efficiently concise. It front-loads the action but wastes the opportunity to add contextual value in subsequent sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters, no output schema, no annotations, and sibling tools that form a booking workflow, the description is incomplete. It lacks critical context about the response format and how this tool fits into the booking lifecycle alongside 'book_meeting'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds no semantic value beyond the schema—no relationships between start/end dates are explained, nor is the optional nature of most parameters versus the required 'page' parameter highlighted.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a clear verb ('Get') and resource ('booking slots') with scope ('for a scheduling page'). It distinguishes from sibling 'book_meeting' (retrieval vs. creation) and 'get_scheduling_pages' (specific page slots vs. page listing), though it could explicitly clarify this is a read-only availability check.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it explain the typical workflow (e.g., that this should be called before 'book_meeting' to find valid times, or that 'page' parameter likely comes from 'get_scheduling_pages').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_page_info (Get Page Information), Quality: C
Get information about a specific scheduling page
| Name | Required | Description | Default |
|---|---|---|---|
| page | Yes | The URL of the scheduling page (e.g. https://meet.bot/your-page) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It discloses the target resource ('scheduling page') but provides no behavioral details: what specific information is returned (owner, availability settings, duration limits?), whether responses are cached, rate limits, or error conditions. 'Get information' is too vague for a tool returning presumably rich data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, appropriately front-loaded with the action. No redundant words, though brevity sacrifices helpful context about return values or sibling differentiation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter read operation with perfect schema coverage. However, lacking an output schema, the description should ideally hint at what 'information' comprises (e.g., page settings, owner details, availability windows). Sibling tools suggest a scheduling domain, but the description doesn't leverage this context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the schema fully documents the 'page' parameter as a URL with a clear example. The description adds no additional parameter guidance (e.g., whether the URL must be canonical, if shortened URLs are accepted), warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Provides clear verb ('Get') and resource ('information about a specific scheduling page'), distinguishing it from mutation tools. However, it fails to differentiate from sibling 'get_scheduling_pages' (which likely returns a list) — it doesn't clarify that this retrieves detailed info for a single page vs. listing multiple pages.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus siblings like 'get_scheduling_pages' or 'get_available_slots'. Doesn't indicate prerequisites (e.g., must the page exist? does it require specific permissions?).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_scheduling_pages (Get Scheduling Pages), Quality: B
Get all scheduling pages for the authenticated user
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden. It discloses scope ('for the authenticated user') and completeness ('all'), implying no filtering. However, lacks details on pagination, rate limits, or return format that would be expected given zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Seven words with zero waste. Front-loaded with the action and resource. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter read operation, but gaps remain: with no output schema provided, description should ideally indicate what gets returned (e.g., list of pages, metadata). Also lacks distinction from get_page_info sibling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, establishing baseline score of 4. Description does not need to compensate for missing parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Provides specific verb (Get) and resource (scheduling pages) scoped to the authenticated user. Clear what it does, though it does not explicitly differentiate from sibling get_page_info (which presumably gets specific pages rather than all).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus get_page_info or other siblings. Does not indicate prerequisites or filtering capabilities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check (Health Check), Quality: A
Check if the Meet.bot API client is healthy and the Bearer token is valid
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that token validation occurs, but omits what 'healthy' means, potential error responses for invalid tokens, or whether this is read-only (though implied by 'Check').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single 11-word sentence with zero waste. Every element earns its place: action verb, target system, health aspect, auth aspect, and validation state.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter tool, but gaps remain: no output schema description (what indicates healthy vs unhealthy?), no mention of side effects or rate limits, and no annotation coverage to compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters (100% coverage by default), establishing baseline 4. Description appropriately focuses on tool behavior rather than inventing parameter documentation where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verbs ('Check') and clear resources ('Meet.bot API client', 'Bearer token'), explicitly distinguishing this diagnostic tool from functional siblings like book_meeting and get_available_slots.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clear context is provided by specifying it validates health and token status, implying use for connectivity/auth verification. However, lacks explicit guidance like 'Call before other operations' or troubleshooting scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
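Taken together, the critiques above imply one workflow the descriptions never state: verify health, list pages, check availability, then book. The sketch below assumes a stand-in `call_tool` transport and guesses at response shapes (neither is documented by this server):

```python
# Sketch of the implied booking workflow across the five tools.
# call_tool is a placeholder for a real MCP client transport, and the
# response field names ("url", "start") are assumptions, not documented schemas.

def call_tool(name, arguments=None):
    # A real client would send a JSON-RPC tools/call request here.
    raise NotImplementedError

def book_first_slot(guest_name, guest_email):
    call_tool("health_check")                  # verify API client and Bearer token
    pages = call_tool("get_scheduling_pages")  # list pages for the authenticated user
    page = pages[0]["url"]                     # assumed response shape
    slots = call_tool("get_available_slots", {"page": page, "count": 1})
    return call_tool("book_meeting", {
        "page": page,
        "start": slots[0]["start"],            # assumed response shape
        "guest_name": guest_name,
        "guest_email": guest_email,
    })
```

Even as a sketch, this is the sequencing the tool descriptions should spell out explicitly.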
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
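Before publishing, it may help to sanity-check the manifest's shape locally. This is a minimal sketch, not Glama's actual validation logic; the email is a placeholder:

```python
# Minimal local sanity check for a /.well-known/glama.json manifest.
# looks_valid is a hypothetical helper, not part of Glama's tooling.
manifest = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

def looks_valid(doc):
    # Require the schema pointer and at least one maintainer with an email.
    maintainers = doc.get("maintainers", [])
    return (
        doc.get("$schema", "").startswith("https://glama.ai/")
        and len(maintainers) > 0
        and all("email" in m for m in maintainers)
    )

print(looks_valid(manifest))  # → True
```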
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.