mcp-server
Server Details
Agent-first meeting-scheduling polls for humans and agents. Create polls, vote, and find times.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 6 of 6 tools scored.
Every tool has a clearly distinct purpose with no overlap: create, close, reopen, vote, get results (participant view), and get admin view (full details). The descriptions explicitly differentiate tools like meetlark_get_results and meetlark_get_admin_view, preventing confusion.
All tools follow a consistent verb_noun pattern with the 'meetlark_' prefix: meetlark_create_poll, meetlark_close_poll, meetlark_reopen_poll, meetlark_vote, meetlark_get_results, meetlark_get_admin_view. This predictable naming aids agent selection and understanding.
Six tools are well-scoped for a scheduling poll server, covering the full lifecycle: creation, voting, status checks (both participant and admin views), closing, and reopening. Each tool earns its place without redundancy or bloat.
The toolset provides complete CRUD/lifecycle coverage for scheduling polls: create, read (both participant and admin views), update (via vote and reopen), and delete (via close). There are no obvious gaps, enabling agents to manage polls end-to-end.
Available Tools
6 tools

meetlark_close_poll
Close a poll to stop accepting new votes. Use the admin token from poll creation.
| Name | Required | Description | Default |
|---|---|---|---|
| adminToken | Yes | The adm_xxx token from poll creation | |
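As a sketch of how an agent might invoke this tool over MCP's JSON-RPC `tools/call` method (the token value below is a placeholder, not a real credential):

```python
import json

# Hypothetical MCP "tools/call" request for meetlark_close_poll.
# "adm_xxx" is an illustrative placeholder for the admin token
# returned when the poll was created.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "meetlark_close_poll",
        "arguments": {"adminToken": "adm_xxx"},
    },
}
print(json.dumps(request, indent=2))
```

The same single-token shape applies to meetlark_reopen_poll and meetlark_get_admin_view, which also take only `adminToken`.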
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the admin token requirement, which is useful for authentication context, but does not cover other behavioral traits such as whether this action is reversible, potential side effects, or error handling. It adds some value but leaves gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and includes a concise prerequisite in the second. Both sentences earn their place by providing essential information without any waste, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (a mutation action with one parameter) and no annotations or output schema, the description is somewhat complete but lacks details on behavioral aspects like reversibility or response format. It covers the basic purpose and prerequisites but could benefit from more context to fully guide an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the adminToken parameter fully documented in the schema. The description adds minimal semantic context by linking the token to 'poll creation', but does not provide additional details beyond what the schema already specifies. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Close a poll') and the outcome ('to stop accepting new votes'), distinguishing it from sibling tools like meetlark_reopen_poll or meetlark_vote. It directly addresses what the tool does without being vague or tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('to stop accepting new votes') and implies prerequisites ('Use the admin token from poll creation'), but does not explicitly state when not to use it or name alternatives like meetlark_reopen_poll. This gives good guidance but lacks explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meetlark_create_poll
Create a scheduling poll to find a time that works for a group. Returns an admin URL (keep private) and a participation URL (share with participants). The creator's email must be verified -- if not yet verified, a verification email is sent automatically. Ask the user to check their inbox and click the link, then retry.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | What is this poll for? | |
| timeSlots | Yes | Available time slots to vote on | |
| creatorName | No | Name of the poll creator | |
| description | No | Additional context for participants | |
| creatorEmail | Yes | Email address of the poll creator | |
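A minimal creation request might look like the following. Note that the listing does not show the expected `timeSlots` format; ISO 8601 start/end pairs are assumed here purely for illustration, so check the tool's input schema for the actual shape.

```python
import json

# Hypothetical MCP "tools/call" request for meetlark_create_poll.
# The timeSlots structure is an ASSUMPTION (ISO 8601 start/end pairs);
# the tool's real input schema may differ.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "meetlark_create_poll",
        "arguments": {
            "title": "Q3 planning sync",
            "creatorEmail": "alice@example.com",
            "creatorName": "Alice",  # optional
            "description": "Pick a 1-hour slot next week.",  # optional
            "timeSlots": [  # assumed format
                {"start": "2025-07-07T14:00:00Z", "end": "2025-07-07T15:00:00Z"},
                {"start": "2025-07-08T14:00:00Z", "end": "2025-07-08T15:00:00Z"},
            ],
        },
    },
}
print(json.dumps(request, indent=2))
```

Per the description, if `creatorEmail` is not yet verified, the server sends a verification email; the agent should ask the user to click the link and then retry this call.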
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it returns two different URLs with privacy guidance, explains the email verification requirement with automatic email sending, and provides error recovery instructions. It doesn't mention rate limits or authentication details, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly front-loaded with the core purpose, followed by important behavioral details. Every sentence earns its place: the first explains what it does and returns, the second covers the verification requirement, and the third provides recovery instructions. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no annotations and no output schema, the description does an excellent job covering the essential context: purpose, return values, and important behavioral constraints. It could be slightly more complete by mentioning what happens if creation fails beyond verification issues, but it's very thorough given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description doesn't add any additional parameter semantics beyond what's in the schema properties, so it meets the baseline expectation but doesn't provide extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Create a scheduling poll') and resource ('to find a time that works for a group'), distinguishing it from sibling tools like close_poll, get_results, or vote. It provides a complete picture of what the tool does beyond just the name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (to create a poll for group scheduling) and mentions a prerequisite (creator's email verification). However, it doesn't explicitly contrast when to use this versus alternatives like meetlark_vote or meetlark_get_results, which would be needed for a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meetlark_get_admin_view
Get full poll details including all votes and participants. This is the only way to see who voted and how they voted. Returns poll info, time slots, every participant's votes (yes/maybe/no per slot), participant emails, and admin URLs. Use the admin token from poll creation. Use this to check vote status or results.
| Name | Required | Description | Default |
|---|---|---|---|
| adminToken | Yes | The adm_xxx token from poll creation | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it requires admin-level access ('admin token from poll creation'), reveals sensitive information ('who voted and how they voted'), and describes the comprehensive return data structure. It doesn't mention rate limits, error conditions, or pagination, but covers the essential security and data scope aspects adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three sentences that each earn their place: first states purpose and uniqueness, second details return data, third provides usage context and prerequisites. No wasted words, and key information is front-loaded about what makes this tool distinct.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (admin-only poll viewing), no annotations, and no output schema, the description does well by explaining the required authentication, what data is returned, and when to use it. It could be more complete by specifying the exact format of return values or error cases, but covers the essential context for an agent to understand and invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single parameter (adminToken). The description adds minimal value beyond the schema by mentioning 'admin token from poll creation' which slightly reinforces the schema's description. This meets the baseline of 3 when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full poll details') and resource ('poll'), including what information is retrieved ('all votes and participants', 'poll info, time slots, every participant's votes, participant emails, and admin URLs'). It distinguishes from siblings by emphasizing it's 'the only way to see who voted and how they voted', contrasting with tools like meetlark_get_results which likely shows aggregated results without individual voter details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided on when to use this tool ('Use this to check vote status or results') and when not to use alternatives (implied by stating it's 'the only way to see who voted and how they voted'). It also specifies prerequisites ('Use the admin token from poll creation'), making it clear this is for poll administrators only, not general voters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meetlark_get_results
Get poll details and available time slots using a participation token. Returns the poll title, description, status, and time slots -- but NOT votes or participant data. This is the view a participant sees before voting. To see who voted and how, use meetlark_get_admin_view instead.
| Name | Required | Description | Default |
|---|---|---|---|
| participateToken | Yes | The prt_xxx token from the participation URL | |
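As a sketch, a participant-side agent could fetch the poll's details with a request like this (the token value is a placeholder taken from a participation URL; per the description, the response includes the title, description, status, and time slots, but no votes):

```python
import json

# Hypothetical MCP "tools/call" request for meetlark_get_results.
# "prt_xxx" is an illustrative placeholder for the participation token.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "meetlark_get_results",
        "arguments": {"participateToken": "prt_xxx"},
    },
}
print(json.dumps(request, indent=2))
```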
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It effectively discloses key behavioral traits: it's a read-only operation (implied by 'Get'), returns specific data fields while excluding others (votes/participant data), and clarifies the user perspective ('participant sees before voting'). However, it doesn't mention potential errors, rate limits, or authentication needs beyond the token.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage context and sibling differentiation. Every sentence adds value: the first defines the tool, the second clarifies exclusions, and the third provides guidelines. It's efficient with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (read operation with one parameter), no annotations, and no output schema, the description does well by explaining what data is returned and excluded, and when to use it. However, it doesn't detail the return format (e.g., structure of time slots) or error cases, leaving some gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter. The description adds minimal value beyond the schema by mentioning 'participation token' and 'participation URL', but doesn't provide additional syntax or format details. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get poll details and available time slots') and the resource ('using a participation token'), distinguishing it from siblings by specifying it returns 'the poll title, description, status, and time slots -- but NOT votes or participant data'. This is precise and avoids tautology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use this tool ('This is the view a participant sees before voting') and provides a clear alternative ('To see who voted and how, use meetlark_get_admin_view instead'), which directly addresses sibling differentiation and usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meetlark_reopen_poll
Reopen a previously closed poll to accept votes again. Use the admin token from poll creation.
| Name | Required | Description | Default |
|---|---|---|---|
| adminToken | Yes | The adm_xxx token from poll creation | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the need for an 'admin token from poll creation', which hints at authentication requirements, but lacks details on permissions, rate limits, error handling, or what happens upon reopening (e.g., if votes are reset). This leaves significant gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and concise, consisting of two sentences that directly address the tool's purpose and a key requirement. Every sentence earns its place without redundancy, making it efficient and easy to understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a mutation with one parameter) and the lack of annotations and output schema, the description is minimally adequate. It covers the basic action and authentication need but omits details on behavioral outcomes, error cases, or return values, leaving room for improvement in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the parameter 'adminToken' fully documented in the schema. The description adds minimal value by restating the need for the 'admin token from poll creation', but does not provide additional syntax or format details beyond what the schema already covers, aligning with the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Reopen a previously closed poll') and the resource ('poll'), with the verb 'reopen' distinguishing it from sibling tools like 'close_poll' or 'create_poll'. It precisely defines the tool's function without being tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Reopen a previously closed poll to accept votes again'), implying it should be used after a poll has been closed. However, it does not explicitly mention when not to use it or name alternatives (e.g., using 'create_poll' for a new poll instead), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
meetlark_vote
Cast a vote on a scheduling poll. Each time slot can be voted yes, maybe, or no. Requires participant name and email. Use the participation token from the poll's participate URL. If the email matches a previous voter on this poll, their response is updated instead of creating a duplicate.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name to display in results | |
| email | Yes | Participant's email address (required). Used to identify returning voters and send a confirmation email. | |
| votes | Yes | Votes for each time slot | |
| participateToken | Yes | The prt_xxx token from the participation URL | |
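A voting request might look like the following sketch. The listing does not show the structure of the `votes` parameter; one yes/maybe/no entry per slot, keyed by a hypothetical `slotId`, is assumed here for illustration, so consult the tool's input schema for the actual shape.

```python
import json

# Hypothetical MCP "tools/call" request for meetlark_vote.
# The votes structure is an ASSUMPTION (one response per time slot,
# with a hypothetical slotId field); the real schema may differ.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "meetlark_vote",
        "arguments": {
            "name": "Bob",
            "email": "bob@example.com",
            "participateToken": "prt_xxx",
            "votes": [  # assumed shape
                {"slotId": "slot_1", "response": "yes"},
                {"slotId": "slot_2", "response": "maybe"},
            ],
        },
    },
}
print(json.dumps(request, indent=2))
```

Per the description, re-sending this call with the same email updates the earlier response instead of creating a duplicate voter.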
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains key behaviors: the update-or-create logic ('If the email matches a previous voter on this poll, their response is updated instead of creating a duplicate'), the voting options per time slot, and the required authentication mechanism (participation token). It doesn't mention rate limits, error conditions, or confirmation details, but covers the essential mutation behavior well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and front-loaded: the first sentence states the core purpose, followed by essential details about voting options, requirements, token usage, and update behavior. Every sentence earns its place with no wasted words, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description does a good job covering the essential context: what the tool does, how to use it, and key behavioral traits. It could be more complete by mentioning potential errors, response format, or side effects (e.g., email notifications), but given the clear schema and focused purpose, it's largely adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all four parameters. The description adds minimal additional semantics beyond the schema—it mentions that email is used 'to identify returning voters and send confirmation email' (which is partly in the schema) and that votes are 'for each time slot'. This meets the baseline of 3 when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Cast a vote'), resource ('on a scheduling poll'), and scope ('Each time slot can be voted yes, maybe, or no'), distinguishing it from sibling tools like create_poll or get_results. It provides a precise verb+resource combination that leaves no ambiguity about the tool's function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('Cast a vote on a scheduling poll') and mentions prerequisites ('Requires participant name and email', 'Use the participation token from the poll's participate URL'). However, it doesn't explicitly state when NOT to use it or name alternatives among siblings (e.g., when to use close_poll instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.