myagentinbox
Server Details
Disposable email inboxes for AI agents. Auto-deletes after 24 hours.
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose with no ambiguity: check_inbox lists messages, create_inbox generates an inbox, download_attachment handles attachments, and read_message retrieves full message content. The descriptions reinforce these distinct roles, making misselection unlikely.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., check_inbox, create_inbox, download_attachment, read_message). This predictable naming scheme enhances readability and usability for agents.
With 4 tools, the server is well-scoped for managing a disposable email inbox, covering the core operations: inbox creation, message listing, reading, and attachment handling. Each tool earns its place; the set is neither excessive nor insufficient.
The tool set provides complete CRUD/lifecycle coverage for the disposable email domain: create (create_inbox), read (check_inbox, read_message, download_attachment), and implicit delete via expiry. No obvious gaps exist, ensuring agents can handle typical workflows without dead ends.
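The lifecycle described above can be sketched as a chain of MCP `tools/call` requests. The payloads below are illustrative Python dicts following the JSON-RPC shape the MCP spec defines; the address, message_id, and filename values are placeholders, not real data returned by the server.

```python
import json

def tool_call(request_id, name, arguments):
    """Build a JSON-RPC 2.0 tools/call request for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# 1. Create an inbox (no arguments); the response would include the address.
create = tool_call(1, "create_inbox", {})

# 2. List message summaries for that address.
check = tool_call(2, "check_inbox", {"address": "abc123@myagentinbox.com"})

# 3. Read one message in full, using a message_id from step 2's results.
read = tool_call(3, "read_message", {
    "address": "abc123@myagentinbox.com",
    "message_id": "msg_001",
})

# 4. Fetch an attachment named in the read_message result.
download = tool_call(4, "download_attachment", {
    "address": "abc123@myagentinbox.com",
    "message_id": "msg_001",
    "filename": "invoice.pdf",
})

print(json.dumps(download, indent=2))
```

Note how each step's arguments come from the previous step's response, which is why the schema descriptions cross-reference check_inbox and read_message.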
Available Tools
4 tools

check_inbox · Grade A · Read-only · Idempotent
Check for messages in a disposable inbox. Returns a list of message summaries including sender, subject, and preview.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | The inbox email address (e.g. abc123@myagentinbox.com) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnly/idempotent), but the description adds valuable behavioral context about the return payload structure (sender, subject, preview) that isn't in the annotations or schema. It could be strengthened by mentioning the disposable-inbox lifecycle or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loaded action ('Check for messages'), followed by return value specification. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a simple one-parameter read operation. It compensates for the missing output schema by detailing the return structure in the description. Minor gap: it could clarify the relationship to read_message (check_inbox returns metadata; read_message returns the full body).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with a clear description of the 'address' parameter in the schema. The description text doesn't mention the parameter, but with complete schema coverage the baseline score of 3 is appropriate, as no additional semantics are required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear, specific verb ('Check') plus resource ('messages in a disposable inbox'). It explicitly distinguishes itself from the sibling read_message by specifying that it returns 'summaries' rather than full content, and it implies the disposable/temporary nature that ties it to the create_inbox workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lacks explicit when-to-use or when-not-to-use guidance relative to sibling tools. While 'disposable inbox' implies a workflow with create_inbox, it doesn't explicitly state prerequisites (an inbox must be created first) or clarify when to use read_message instead (full content vs. summaries).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_inbox · Grade A
Create a disposable email inbox that expires in 24 hours. Returns the generated email address, creation time, and expiry.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral context beyond annotations: the 24-hour expiration constraint and the specific return values (email address, creation time, expiry). Annotations already cover idempotency and read-only status.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first covers creation and lifecycle, second covers return payload. Efficiently front-loaded with essential constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a zero-parameter creation tool. Compensates for the missing output schema by documenting the return values. Could mention external effects (the created inbox can receive mail from the outside world) given openWorldHint=true.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool takes zero parameters, which meets the baseline score under the evaluation rules. The description correctly implies that no configuration is needed to generate an inbox.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Create') and resource ('disposable email inbox') with the specific expiry constraint ('24 hours'). Distinguished from its siblings by action type, though it does not explicitly reference the check/read/download workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States what the tool does but provides no explicit 'when to use' guidance, prerequisites, or workflow relationship to sibling tools (check_inbox, read_message) that would require the generated address.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
download_attachment · Grade A · Read-only · Idempotent
Download an email attachment. Returns images as base64, text files inline, and binary files as download URLs.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | The inbox email address (e.g. abc123@myagentinbox.com) | |
| filename | Yes | The attachment filename from read_message results | |
| message_id | Yes | The message ID from check_inbox results | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the polymorphic return behavior that annotations cannot express: 'Returns images as base64, text files inline, and binary files as download URLs.' This critical output specification is not covered by the annotations (which only indicate safety/read-only traits).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose, second documents critical return format behavior. Front-loaded with the action, followed by output specification. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema, the description comprehensively covers return value behavior (three different formats based on file type). Combined with full schema coverage and complete annotations (readOnly, idempotent), no gaps remain for this tool's complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is appropriately 3. The main description does not add parameter-specific semantics, but the schema fully documents each parameter's purpose (address, message_id, filename) including their relationships to sibling tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Download') and resource ('email attachment') that clearly distinguishes it from sibling tools like 'read_message' (content/body) and 'check_inbox' (listing). The scope is precisely defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the main description text doesn't explicitly state alternatives, the parameter descriptions in the schema reference 'read_message results' and 'check_inbox results', providing clear context for the tool-chain workflow. No explicit 'when not to use' guidance is present, which prevents a score of 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_message · Grade B · Read-only · Idempotent
Read the full content of a specific email message including sender, recipients, subject, body text, and attachment info.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | The inbox email address (e.g. abc123@myagentinbox.com) | |
| message_id | Yes | The message ID from check_inbox results | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true, covering safety and repeatability. The description adds value by specifying what data is returned (attachment info, body text, etc.), which contextualizes the operation, but it omits details on auth requirements, rate limits, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It front-loads the action ('Read the full content') and qualifies it with specific details, earning its length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description appropriately enumerates the returned fields to compensate. Combined with clear annotations and simple parameter schema, this provides sufficient completeness for invocation, though explicit workflow context with siblings would improve it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description does not add syntax details, format constraints, or semantic nuances beyond what the schema already provides for 'address' and 'message_id'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Read'), resource ('email message'), and scope ('full content'), and lists specific components retrieved (sender, recipients, subject, body, attachment info). This implicitly distinguishes it from sibling 'check_inbox' (likely listing) and 'download_attachment' (files vs. metadata), though it does not explicitly name these alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus siblings, nor does it mention the prerequisite workflow (obtaining message_id from check_inbox). While the parameter schema references check_inbox, the description itself lacks when/when-not guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
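Before publishing, the claim file can be sanity-checked locally. This is a minimal sketch that only validates the two fields shown in the structure above; the hosted schema at glama.ai/mcp/schemas/connector.json remains authoritative.

```python
import json

def validate_claim_file(text):
    """Check that a glama.json payload parses and has a usable maintainers list."""
    data = json.loads(text)  # raises ValueError on malformed JSON
    maintainers = data.get("maintainers", [])
    assert isinstance(maintainers, list) and maintainers, \
        "maintainers must be a non-empty list"
    for m in maintainers:
        email = m.get("email", "")
        assert "@" in email, f"not an email address: {email!r}"
    return data

claim = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""

print(validate_claim_file(claim)["maintainers"][0]["email"])
```

Remember that the email must match your Glama account email, which this local check cannot verify.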
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.