Zyntra - Temp e-mails MCP
Server Details
MCP server for e-mail testing: create disposable inboxes, wait for delivery, and extract e-mail content or links - all from your AI agent or test automation workflow.
Get a free API key at https://app.zyntra.app/
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
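The tools below are invoked over the Streamable HTTP transport listed above. A minimal sketch of the request shape, assuming JSON-RPC `tools/call` framing and the X-API-Key header named in the tool descriptions (the URL here is a placeholder, not the real endpoint):

```python
# Sketch of a Streamable HTTP request to an MCP server: a JSON-RPC
# "tools/call" envelope plus the X-API-Key header the tool
# descriptions require. SERVER_URL is a placeholder.
SERVER_URL = "https://example.invalid/mcp"  # placeholder, not the real endpoint

headers = {
    "Content-Type": "application/json",
    "X-API-Key": "<your key from app.zyntra.app>",
}
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_emails",
        "arguments": {"inbox": "teamId.test@zyntramail.com"},
    },
}
```

Any of the five tools documented below can be substituted into `params.name` with its own arguments.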
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 5 of 5 tools scored.
All five tools target distinct operations: listing emails, retrieving a specific email, getting the last email, deleting an email, and listing attachments for an email. There is no overlap in functionality.
All tool names follow a consistent verb_noun pattern in snake_case (delete_email, get_email, get_last_email, list_attachments, list_emails), making them predictable and easy to understand.
With 5 tools, the server is well-scoped for a temporary email service. Each tool covers an essential task (list, get, delete, attachments) without unnecessary duplication or gaps.
The tool set covers the core lifecycle of a temporary inbox: listing, retrieving, deleting emails, and handling attachments. There are no obvious missing operations for the stated purpose of managing temp emails.
Available Tools
5 tools

delete_email (Destructive, Idempotent)
Delete an email from a zyntramail.com inbox. Example: "Delete email 550e8400 from teamId.test@zyntramail.com". Requires an X-API-Key from app.zyntra.app.
| Name | Required | Description | Default |
|---|---|---|---|
| inbox | Yes | Full inbox address, e.g. teamId.test@zyntramail.com | |
| message_uuid | Yes | UUID of the email to delete | |
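Since this is the only destructive tool in the set, a client may want to validate both required parameters before sending the call. A hedged sketch (the UUID is the example value from the description, not a real message):

```python
# Client-side validation of delete_email arguments against the
# required parameters in the table above, before any network call.
REQUIRED = ("inbox", "message_uuid")

def validate_delete_args(args: dict) -> list[str]:
    """Return the names of any missing required parameters."""
    return [name for name in REQUIRED if not args.get(name)]

args = {"inbox": "teamId.test@zyntramail.com", "message_uuid": "550e8400"}
missing = validate_delete_args(args)
# → [] (both required parameters present)
```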
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint: true and idempotentHint: true. The description adds the requirement of an API key, which is valuable behavioral context beyond the annotations. There is no contradiction between description and annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences plus an example, each sentence serving a purpose (main function, authentication requirement). No redundant information, and the core function is stated first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple delete operation with annotations already covering safety (destructive, idempotent) and schema covering parameters, the description is complete. It mentions the authentication requirement and gives an example, which is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions for both parameters. The description does not add additional semantic meaning beyond the schema, but the example illustrates their usage. Baseline score of 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Delete an email from a zyntramail.com inbox' with a specific verb and resource. It includes an example that demonstrates usage, and it is easily distinguished from sibling tools which are all about retrieval (get_email, get_last_email, list_attachments, list_emails).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes that the tool requires an X-API-Key from app.zyntra.app, providing prerequisite context. While it doesn't explicitly state when not to use the tool or name alternatives, the sibling tools are all retrieval-focused, so its role as the deletion tool is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_email (Read-only, Idempotent)
Get full content of a specific email from a zyntramail.com inbox, including HTML body, headers, and attachment IDs. Example: "Get the content of email 550e8400 in teamId.test@zyntramail.com". Requires an X-API-Key from app.zyntra.app.
| Name | Required | Description | Default |
|---|---|---|---|
| inbox | Yes | Full inbox address, e.g. teamId.test@zyntramail.com | |
| message_uuid | Yes | UUID of the email to retrieve | |
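A common follow-up to get_email is extracting links from the HTML body, per the server's stated purpose. A sketch, assuming the result is an object with an `html` field (the field names here are assumptions, not documented output):

```python
import re

def extract_links(html_body: str) -> list[str]:
    """Pull href targets out of an HTML email body."""
    return re.findall(r'href="([^"]+)"', html_body)

# Hypothetical get_email result shape; field names are assumptions.
email = {
    "html": '<a href="https://example.com/verify?token=abc">Verify</a>',
    "attachment_ids": ["a1b2"],
}
links = extract_links(email["html"])
# → ['https://example.com/verify?token=abc']
```

The attachment IDs in the result can then be passed to list_attachments.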
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, destructiveHint, idempotentHint. The description adds the requirement for an X-API-Key and specifies return content (HTML body, headers, attachment IDs), which is useful behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus an example, front-loaded with the verb 'Get', no redundant information. Every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with two required parameters and annotations covering safety, the description adequately explains what is fetched (body, headers, attachment IDs). The lack of output documentation is not penalized, since no output schema is defined.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with descriptions. The description adds an example demonstrating real-world usage of inbox and message_uuid, providing useful context beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets full content of a specific email, including HTML body, headers, and attachment IDs, which distinguishes it from siblings like 'list_emails' or 'delete_email'. An example further clarifies usage.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions an API key requirement but provides no guidance on when to use this tool over alternatives like 'get_last_email' or 'list_emails'. No explicit when-not or comparison to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_last_email (Read-only)
Get the most recently received email for a zyntramail.com inbox. Ideal for automation: "Get the last email in teamId.test@zyntramail.com and extract the verification code". Requires an X-API-Key from app.zyntra.app.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Full inbox address, e.g. teamId.test@zyntramail.com | |
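The "wait for delivery" workflow from the server description maps naturally onto polling this tool. A sketch, assuming the caller supplies a function wrapping get_last_email whose return shape (an object with a `body` field) is an assumption:

```python
import re
import time

def extract_verification_code(body: str):
    """Pull a 6-digit code out of an email body (common OTP format)."""
    match = re.search(r"\b(\d{6})\b", body)
    return match.group(1) if match else None

def wait_for_code(fetch_last_email, timeout_s: float = 30.0, interval_s: float = 2.0):
    """Poll get_last_email until an email containing a code arrives.

    fetch_last_email is a caller-supplied wrapper around the tool;
    its return shape is an assumption, not documented output.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        email = fetch_last_email()
        if email:
            code = extract_verification_code(email.get("body", ""))
            if code:
                return code
        time.sleep(interval_s)
    return None

# Example with a stubbed fetch:
code = wait_for_code(lambda: {"body": "Your code is 493027."}, timeout_s=1, interval_s=0)
# → "493027"
```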
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses the need for an X-API-Key, which is a behavioral constraint. Annotations already indicate read-only and non-destructive nature; description adds authentication context. Does not mention rate limits or empty inbox behavior, but core safety is clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with a clear purpose and a concrete example. No fluff, front-loaded with critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema is provided, and the description does not describe the return structure (likely an email object). Given the tool's simplicity and sibling context, this is a minor gap but still incomplete for full agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the 'email' parameter already well-described. The description includes an example address that reinforces the schema but adds no new semantic meaning beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves the most recent email for a zyntramail.com inbox, distinguishing it from siblings like get_email (specific email) and list_emails (multiple). The verb 'get' and resource 'last email' are specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an explicit use case for automation (extracting verification code) and mentions API key requirement. Lacks explicit instructions on when not to use or comparison to alternatives, but context hints are sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_attachments (Read-only, Idempotent)
List all attachments for a specific zyntramail.com email. Returns filename, MIME type, and ID for each. Example: "List attachments in email 550e8400". Requires an X-API-Key from app.zyntra.app.
| Name | Required | Description | Default |
|---|---|---|---|
| message_uuid | Yes | UUID of the email | |
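The description says each attachment carries a filename, MIME type, and ID, which makes the result easy to filter client-side. A sketch (the exact field names are assumptions based on the listed return values):

```python
# Filtering a hypothetical list_attachments result by MIME type.
# Field names follow the return values named in the description
# (filename, MIME type, ID) but are otherwise assumptions.
attachments = [
    {"id": "a1", "filename": "report.pdf", "mime_type": "application/pdf"},
    {"id": "a2", "filename": "logo.png", "mime_type": "image/png"},
]

pdfs = [a for a in attachments if a["mime_type"] == "application/pdf"]
# → one attachment: report.pdf
```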
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, idempotent, and non-destructive. The description adds minimal behavioral context beyond the return fields and domain restriction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus an example: very concise and front-loaded. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool (one parameter, no output schema), the description covers the purpose, return fields, and the API-key requirement. It omits error behavior, but is sufficient overall.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% with clear description for 'message_uuid'. The description reinforces by specifying domain and providing an example, but doesn't add significant new semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (list), resource (attachments for a specific email), and return values (filename, MIME type, ID). It distinguishes from sibling tools like list_emails and delete_email.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides an example and mentions the requirement for an X-API-Key, but does not explicitly compare to alternatives or state when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_emails (Read-only, Idempotent)
List emails in a Zyntra mailbox (zyntramail.com). Example: "List the last 10 emails in teamId.support@zyntramail.com". Requires an X-API-Key from app.zyntra.app.
| Name | Required | Description | Default |
|---|---|---|---|
| skip | No | Emails to skip for pagination (default: 0) | |
| inbox | Yes | Full inbox address, e.g. teamId.test@zyntramail.com | |
| limit | No | Emails to return, max 100 (default: 20) | |
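The skip/limit parameters support a standard pagination loop. A sketch, assuming the caller wraps the tool in a function that returns a list of emails (that return shape is an assumption):

```python
def paginate(fetch_page, limit: int = 20):
    """Page through list_emails using skip/limit.

    fetch_page wraps the tool call; its list-of-emails return
    shape is an assumption here.
    """
    skip = 0
    emails = []
    while True:
        page = fetch_page(skip=skip, limit=limit)
        emails.extend(page)
        if len(page) < limit:  # short page signals the last one
            break
        skip += limit
    return emails

# Stubbed: 45 emails paged 20 at a time → three requests.
fake_store = [{"id": i} for i in range(45)]
result = paginate(lambda skip, limit: fake_store[skip:skip + limit], limit=20)
# → 45 emails collected
```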
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, destructiveHint. The description adds authentication requirements and the specific mailbox domain, providing useful context beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with a concrete example. No extraneous content. Information is front-loaded and every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity and full schema coverage, the description covers the tool's purpose, example usage, and authentication. It does not explain return format or pagination details, but those are in the schema. Adequate for the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description provides an example ('List the last 10 emails') that demonstrates parameter usage, adding clarity beyond the schema's standalone descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List emails in a Zyntra mailbox' with a domain-specific example. It distinguishes from siblings like delete_email, get_email, etc. by indicating it lists multiple emails.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a prerequisite ('Requires an X-API-Key') but does not provide explicit guidance on when to use this tool versus alternatives such as get_email or list_attachments. Usage is implied from the example but not formally stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.