Server Quality Checklist

Profile completion: 75%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    Each tool has a clearly distinct purpose with no overlap: create_signup_inbox sets up the inbox, wait_for_verification_email monitors for arrival, get_latest_email retrieves content, extract_otp_code and extract_verification_link parse specific elements, and complete_signup_flow orchestrates the workflow. The separation between extraction tools (OTP vs. link) and monitoring vs. retrieval is unambiguous.

    Naming Consistency: 5/5

    All tools follow a consistent verb_noun pattern with snake_case throughout (e.g., create_signup_inbox, extract_otp_code, wait_for_verification_email). The naming is predictable and readable, using clear action verbs aligned with each tool's function without any stylistic deviations.

    Tool Count: 5/5

    With 6 tools, the count is well-scoped for the server's purpose of handling temporary email signup workflows. Each tool earns its place by covering distinct steps in the process, from inbox creation to email extraction, without being overly sparse or bloated.

    Completeness: 5/5

    The tool set provides complete coverage for the email-based signup domain: it supports creating an inbox, waiting for and retrieving emails, extracting both OTP codes and verification links, and an end-to-end orchestration tool. There are no obvious gaps, and agents can handle the full lifecycle without dead ends.

  • Average 2.8/5 across 6 of 6 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 6 tools.
  • No known security issues or vulnerabilities reported.

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • wait_for_verification_email

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden but only minimally discloses behavior. While it mentions 'wait' and 'timeout', it fails to explain the polling mechanism, default intervals, what constitutes a 'verification' email match, or failure modes when the timeout is reached.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is efficient and front-loaded with the key action, containing no wasted words. However, it is overly terse given the complexity of the tool and lack of supporting documentation elsewhere.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite having an output schema, the description is incomplete for a 5-parameter polling tool with zero schema coverage. It omits critical context about the polling behavior, filtering logic, and integration with the verification workflow implied by sibling tools.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0%, requiring the description to compensate for all 5 parameters. It implicitly references 'timeout_seconds' via 'timeout is reached' and 'inbox_id' via context, but provides no explanation for 'poll_interval_seconds', 'subject_contains', or 'from_contains' filtering capabilities.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the core action (waiting for a verification email) and mentions the timeout mechanism, but fails to differentiate from sibling 'get_latest_email' or clarify its role in the signup flow alongside 'create_signup_inbox' and 'extract_otp_code'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this polling/waiting tool versus simply retrieving existing emails with 'get_latest_email', or how it relates to the extraction tools. No prerequisites or conditions mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
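    Most of the gaps flagged above live in the tool definition itself. As a hedged sketch only (the parameter names come from this review; the description text, types, and required list are illustrative assumptions, not the server's actual definition), a better-documented wait_for_verification_email might look like:

    ```python
    # Illustrative sketch of a more complete MCP tool definition for
    # wait_for_verification_email. Parameter names are taken from the review
    # above; every description string, the types, and the "required" list are
    # invented for illustration.
    WAIT_FOR_VERIFICATION_EMAIL = {
        "name": "wait_for_verification_email",
        "description": (
            "Poll an inbox until a verification email arrives or "
            "timeout_seconds elapses. Use after create_signup_inbox; prefer "
            "get_latest_email when the email may already be present. Returns "
            "an error result on timeout rather than an empty email."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "inbox_id": {
                    "type": "string",
                    "description": "Inbox ID returned by create_signup_inbox.",
                },
                "timeout_seconds": {
                    "type": "integer",
                    "description": "Maximum time to wait before giving up.",
                },
                "poll_interval_seconds": {
                    "type": "integer",
                    "description": "Delay between inbox checks while waiting.",
                },
                "subject_contains": {
                    "type": "string",
                    "description": "Only match emails whose subject contains this text.",
                },
                "from_contains": {
                    "type": "string",
                    "description": "Only match emails whose sender contains this text.",
                },
            },
            "required": ["inbox_id"],
        },
    }
    ```

    A schema in this shape would lift the Parameters score directly (coverage goes from 0% to 100%), and the sibling references in the description address the Usage Guidelines critique.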

  • extract_verification_link

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. The phrase 'most likely' correctly signals probabilistic/heuristic behavior rather than deterministic extraction. However, it fails to disclose error handling (what happens if no link exists?), side effects, or return value structure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is efficient with no redundancy, but for a tool with four undocumented optional parameters and complex input logic, it is inappropriately brief and front-loaded with only the basic purpose.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given four optional parameters with zero schema documentation and no annotations, the description is incomplete. It does not clarify valid parameter combinations (e.g., whether to provide raw text or inbox/message IDs) or workflow integration, though the existence of an output schema excuses it from detailing return values.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 0% description coverage and the description completely fails to compensate. It does not explain the four parameters (message_text, inbox_id, message_id, preferred_domains), their relationships (mutually exclusive vs. complementary), or how to specify the email source.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States a specific action (extract) and target (verification link from email). However, it does not explicitly differentiate from sibling extract_otp_code, though the distinction is clear from the tool name.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus siblings (e.g., when to use this instead of extract_otp_code), nor does it mention prerequisites such as needing to retrieve an email first using get_latest_email or wait_for_verification_email.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • create_signup_inbox

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden but offers only 'temporary' as behavioral context, hinting at the TTL mechanism. It fails to disclose what happens upon TTL expiration, what the output schema contains (presumably inbox credentials/address), or whether creation is idempotent.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is appropriately front-loaded with the core action. However, extreme brevity contributes to under-specification given the lack of schema descriptions and annotations. No redundancy or fluff present.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While the basic purpose is covered and an output schema exists (relieving the description of return value documentation), the description remains inadequate for a tool with 0% schema coverage and complex sibling interactions. It misses critical context about parameter semantics and workflow sequencing necessary for correct agent invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0%, requiring the description to compensate for both parameters. It only vaguely implies 'service_name' via 'target service' without clarifying expected format (domain name? app name?). It completely omits 'ttl_minutes', leaving the agent unaware that null represents default/forever behavior versus explicit expiration.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action (Create), resource (temporary signup inbox), and scope (for a target service). It effectively distinguishes itself from siblings like 'extract_otp_code' or 'get_latest_email' by establishing this as the provisioning step, though it could explicitly state 'Use this first' for a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use versus alternatives, or prerequisites for the signup workflow. While the name 'create_signup_inbox' implies it precedes extraction tools like 'extract_otp_code', the description fails to state this relationship or mention that an output (likely an email address) is needed before proceeding to wait_for_verification_email.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • extract_otp_code

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full disclosure burden. While 'Extract' implies a read operation, the description fails to specify idempotency, side effects (e.g., marking emails as read), failure modes (no code found), or how it handles multiple potential codes in an inbox.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is efficiently structured and front-loaded, but inappropriately terse for a tool with five undocumented parameters. It lacks the necessary elaboration to compensate for the zero-coverage input schema.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given five parameters with no schema descriptions and a likely structured output (output schema exists), the description is insufficient. It omits critical workflow context suggested by sibling tools (signup flow integration), parameter precedence rules, and error handling behavior.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description partially compensates by mapping 'email body' to message_text and 'message reference' to inbox_id/message_id. However, it fails to explain the mutual exclusivity logic between text and IDs, the distinction between inbox_id and message_id, or the purpose of otp_length constraints.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Extract') and resource ('numeric OTP code') and distinguishes from sibling tool 'extract_verification_link' by specifying it handles codes rather than links. However, 'message reference' is slightly ambiguous regarding whether it refers to inbox_id, message_id, or both.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use this tool versus siblings (e.g., 'extract_verification_link' for links vs codes) or prerequisites (e.g., whether 'get_latest_email' or 'wait_for_verification_email' must be called first). The relationship between providing raw text vs message references is not clarified.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_latest_email

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. While 'Read' implies a fetch operation, the description fails to disclose the side effect of the mark_as_read parameter (which can modify inbox state) or explain error conditions (e.g., empty inbox).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is front-loaded with the core action and contains no redundancy. However, given the lack of schema descriptions and annotations, the extreme brevity results in under-specification rather than efficient conciseness.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple read operation with an output schema present (which handles return value documentation). However, the lack of parameter documentation, behavioral side effects, and differentiation from extraction siblings leaves noticeable gaps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to compensate. It provides minimal context for inbox_id via 'in an inbox', but offers no explanation for mark_as_read (including its default false value and side effects) or inbox_id format expectations.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Read and return') and resource ('latest full email'), with 'full' usefully distinguishing it from sibling extraction tools like extract_otp_code. However, it does not explicitly name alternatives or clarify when to prefer this over specialized extraction tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus siblings (e.g., when to use get_latest_email vs extract_otp_code vs wait_for_verification_email). No prerequisites or exclusions are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • complete_signup_flow

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and successfully discloses the multi-step internal behavior (inbox creation, polling wait, extraction). However, it omits critical behavioral details like resource lifecycle (whether the inbox persists after completion), failure modes when timeout is reached, or side effects.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, dense sentence that is front-loaded with the value proposition ('End-to-end signup helper'). It avoids fluff, though it borders on under-specification given the high parameter count and complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite the existence of an output schema (reducing the need for return value documentation), the description is incomplete due to 7 undocumented parameters and lack of guidance relative to sibling tools. For a composite tool with complex filtering and lifecycle options, more context is required.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage across 7 parameters, the description fails to compensate by explaining any parameter semantics. While 'wait for email' loosely implies timeout_seconds and poll_interval_seconds, critical parameters like service_name, ttl_minutes, preferred_domains, and filter fields (subject_contains, from_contains) remain completely undocumented.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly articulates the composite workflow (create inbox → wait → extract) and positions the tool as an 'end-to-end' solution, effectively distinguishing it from atomic siblings like create_signup_inbox and extract_otp_code. Specific verbs and resources are identified.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The 'end-to-end' framing implicitly contrasts this with the single-purpose sibling tools, suggesting it should be used when full automation is desired. However, there is no explicit guidance on when to use this versus chaining individual tools, or prerequisites for the service_name parameter.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

un-correo-temporal MCP server


Score Badge

un-correo-temporal MCP server


How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.


How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six weighted dimensions, yielding a per-tool Tool Definition Quality Score (TDQS): Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
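The weighting described above can be written out directly. A minimal sketch (the weights, the 60/40 and 70/30 blends, and the tier cutoffs come from the text; all function names are my own, not Glama's actual code):

```python
# Sketch of the quality-score formula described above. Weights and cutoffs
# are taken from the text; naming is illustrative.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict[str, float]) -> float:
    """Weighted 1-5 Tool Definition Quality Score for a single tool."""
    return sum(DIMENSION_WEIGHTS[dim] * score for dim, score in scores.items())

def server_definition_quality(tdqs_list: list[float]) -> float:
    """60% mean + 40% minimum, so one poorly described tool drags the score."""
    return 0.6 * (sum(tdqs_list) / len(tdqs_list)) + 0.4 * min(tdqs_list)

def overall_score(definition_quality: float, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map the overall score onto the A-F tiers from the text."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, with this server's coherence of 5.0 (all four coherence dimensions scored 5/5), a mean TDQS of 2.8, and a hypothetical minimum TDQS of 2.4 (the minimum is illustrative; the report only states the mean), the overall score would be 0.7 × (0.6 × 2.8 + 0.4 × 2.4) + 0.3 × 5.0 = 3.348, a B tier.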

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/francofuji/un-correo-temporal'
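The same request can be made from Python's standard library. A hedged sketch (the endpoint path is taken from the curl example above; the response layout is not documented here, so the JSON is returned as-is):

```python
# Fetch a server's metadata from the Glama MCP directory API.
# Only the endpoint path is taken from the documentation above; the
# shape of the returned JSON is not specified here, so we make no
# assumptions about its fields.
import json
import urllib.request

BASE_URL = "https://glama.ai/api/mcp/v1/servers"

def server_url(author: str, slug: str) -> str:
    """Build the directory API URL for a given server."""
    return f"{BASE_URL}/{author}/{slug}"

def fetch_server(author: str, slug: str) -> dict:
    """GET the server record and parse it as JSON."""
    with urllib.request.urlopen(server_url(author, slug)) as resp:
        return json.load(resp)
```

Calling `fetch_server("francofuji", "un-correo-temporal")` reproduces the curl example.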

If you have feedback or need assistance with the MCP directory API, please join our Discord server.