
Server Details

Buy & manage domains from any AI chat: availability, register, DNS, email forwarding, AI bot stats.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
danboabes/mcpdomain
GitHub Stars
0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

Tool Descriptions: A

Average 4.3/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap. For example, check_domain_availability verifies availability, register_new_domain handles registration, and configure_domain_dns manages DNS settings, making misselection unlikely.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with underscores, such as check_domain_availability, configure_domain_dns, and get_my_domain_details. This uniformity enhances readability and predictability.

Tool Count: 5/5

With 7 tools, the set is well-scoped for domain management, covering availability checks, registration, DNS configuration, email setup, transfers, suggestions, and status lookups, each serving a necessary function.

Completeness: 5/5

The toolset provides complete lifecycle coverage for domain management, including discovery (suggest_available_domains), verification (check_domain_availability), registration (register_new_domain), configuration (configure_domain_dns, configure_domain_email), transfer (transfer_existing_domain), and monitoring (get_my_domain_details), with no apparent gaps.

Available Tools

7 tools
check_domain_availability: A

Check whether a specific internet domain name is available for registration. Returns availability status, price, and alternatives if taken. WHEN TO USE: user asks 'is X.com available?' or 'can I register Y.io?'. ALWAYS call this before register_new_domain.

Parameters (JSON Schema)
domain (required): Complete domain name with TLD, e.g. 'sweetcrumbs.com'
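As a minimal sketch, a call's arguments under this schema might look like the following. The helper function is illustrative and not part of the server; only the `domain` field comes from the documented schema.

```python
# Illustrative arguments for check_domain_availability.
# The validation is a local sanity check, not server behavior.

def build_check_args(domain: str) -> dict:
    # The schema requires a complete domain name with TLD.
    if "." not in domain:
        raise ValueError("domain must include a TLD, e.g. 'sweetcrumbs.com'")
    return {"domain": domain}

args = build_check_args("sweetcrumbs.com")
```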
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (checks availability), what it returns (status, price, alternatives), and its relationship to other operations (should be called before registration). However, it lacks details on potential errors, rate limits, or authentication needs, which would be helpful for a tool interacting with external services.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage guidelines, all in three concise sentences. Each sentence adds clear value without redundancy, making it efficient and easy to parse for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (checking domain availability with external dependencies), no annotations, and no output schema, the description does a good job covering purpose, usage, and basic behavior. However, it could improve by mentioning output structure (e.g., JSON format) or error handling, which would enhance completeness for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'domain' parameter well-documented in the schema itself. The description does not add any additional parameter semantics beyond what the schema provides, such as format constraints or examples not already covered. This meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose with a specific verb ('check') and resource ('internet domain name'), and distinguishes it from siblings by mentioning it returns availability status, price, and alternatives. It clearly differentiates from tools like 'register_new_domain' and 'suggest_available_domains' by focusing on checking a specific domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines with 'WHEN TO USE' examples (e.g., user asks 'is X.com available?') and clear directives ('ALWAYS call this before register_new_domain'), effectively distinguishing when to use this tool versus alternatives like 'register_new_domain' or 'suggest_available_domains'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

configure_domain_dns: A

Add/update DNS records (A, CNAME, MX, TXT). Use to point domain to Vercel, Netlify, GitHub Pages etc. WHEN TO USE: user wants to connect domain to hosting.

Parameters (JSON Schema)
domain (required): A domain previously registered via register_new_domain.
records (required): One or more DNS records to add or update. Existing records with the same type+name combination are replaced.
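A sketch of what a `records` payload might look like. The schema only says "one or more DNS records", so the per-record field names (type/name/value) and the IP value here are assumptions, not documented shapes.

```python
# Illustrative payload for configure_domain_dns. Record field names
# are assumed; the schema does not publish the record object shape.

args = {
    "domain": "sweetcrumbs.com",
    "records": [
        # Point the apex at a hosting provider's IP (hypothetical value).
        {"type": "A", "name": "@", "value": "76.76.21.21"},
        # Point www at the apex via CNAME.
        {"type": "CNAME", "name": "www", "value": "sweetcrumbs.com"},
    ],
}

# Per the schema note, records with the same type+name combination
# replace existing ones, so re-sending this payload is idempotent.
```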
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It states the tool 'Add/update DNS records' which implies mutation, but doesn't disclose behavioral traits like required permissions, whether changes are reversible, or potential impact on existing services. The description adds some context about common use cases but lacks details on rate limits, authentication needs, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences that each serve clear purposes. The first states the tool's function with examples, the second provides usage guidance. There's no wasted text, though it could be slightly more structured with clearer separation between purpose and guidelines.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with no annotations and no output schema, the description provides adequate basic information about what the tool does and when to use it. However, it lacks important contextual details about the mutation's behavior, potential side effects, or what the tool returns. The 100% schema coverage helps, but the description itself doesn't fully compensate for the missing behavioral transparency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters (domain and records). The description adds minimal value beyond what's in the schema: it mentions the same record types and use cases already covered in the schema descriptions. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Add/update DNS records') and resource ('DNS records'), listing the record types (A, CNAME, MX, TXT). It distinguishes from siblings by focusing on DNS configuration rather than domain registration, email setup, or availability checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly includes 'WHEN TO USE: user wants to connect domain to hosting' and provides concrete examples (Vercel, Netlify, GitHub Pages). This gives clear context for when to select this tool over alternatives like configure_domain_email or register_new_domain.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

configure_domain_email: A

Set up email forwarding for a registered domain. Forward any@domain to Gmail/Outlook. No MX records needed. WHEN TO USE: user just registered a domain, or asks about professional email.

Parameters (JSON Schema)
domain (required): A domain previously registered via register_new_domain, e.g. 'sweetcrumbs.com'.
aliases (optional): Specific forwarding rules instead of a catch-all. Each rule routes one address @domain to one destination inbox.
catch_all_to (optional): Forward ALL emails sent to any address @domain to this inbox (e.g. 'you@gmail.com'). Mutually exclusive with 'aliases' — use one or the other.
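The aliases/catch-all mutual exclusion is the one non-obvious constraint here; a hypothetical client-side helper (not part of the server) might enforce it before building the payload:

```python
# Illustrative arguments for configure_domain_email. The schema marks
# 'aliases' and 'catch_all_to' as mutually exclusive; this hypothetical
# helper rejects payloads that set both.

def build_email_args(domain, aliases=None, catch_all_to=None):
    if aliases is not None and catch_all_to is not None:
        raise ValueError("'aliases' and 'catch_all_to' are mutually exclusive")
    args = {"domain": domain}
    if aliases is not None:
        args["aliases"] = aliases
    if catch_all_to is not None:
        args["catch_all_to"] = catch_all_to
    return args

# Catch-all: forward every address @domain to one inbox.
catch_all = build_email_args("sweetcrumbs.com", catch_all_to="you@gmail.com")
```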
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses key behavioral traits: 'Forward any@domain to Gmail/Outlook' (functionality), 'No MX records needed' (implementation detail), and implies mutation (setup/configuration). However, it doesn't mention permissions needed, rate limits, or what happens if domain isn't registered, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two sentences: first states purpose and key constraint, second provides explicit usage guidance. Every sentence earns its place with no wasted words, and it's appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a configuration tool with no annotations and no output schema, the description provides good context: purpose, usage guidelines, and key behavioral constraint. It could be more complete by mentioning error conditions or response format, but covers the essential context given the structured data available.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain domain format or alias rules further). Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Set up email forwarding') and resource ('for a registered domain'), distinguishing it from sibling tools like configure_domain_dns (DNS configuration) or register_new_domain (domain registration). It specifies forwarding to Gmail/Outlook and mentions no MX records needed, providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit 'WHEN TO USE' guidance: 'user just registered a domain, or asks about professional email.' This directly tells the agent when to invoke this tool versus alternatives, providing clear context for application.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_my_domain_details: A

Look up the registration status of a domain you previously registered through MCPDomain. Returns: registered_at, expires_at, current nameservers, and email-forwarding status (active / pending verification / pending NS propagation / etc). WHEN TO USE: user asks 'is my domain ready?', 'what's the status of X?', 'did email forwarding finish setting up?'.

Parameters (JSON Schema)
domain (required): A domain previously registered via register_new_domain
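Since this tool publishes no output schema, a caller has to infer the result shape from the description. The sketch below assumes field names (registered_at, expires_at, nameservers, email_forwarding_status) derived from the prose; treat the whole shape as an assumption.

```python
# Hypothetical result shape for get_my_domain_details, inferred from
# the description; the server does not publish an output schema.

example_result = {
    "registered_at": "2024-06-01T12:00:00Z",
    "expires_at": "2025-06-01T12:00:00Z",
    "nameservers": ["ns1.example-dns.net", "ns2.example-dns.net"],
    "email_forwarding_status": "pending NS propagation",
}

def is_fully_ready(result: dict) -> bool:
    # 'active' is the only terminal-success state the description lists;
    # the pending states mean verification or NS propagation is ongoing.
    return result["email_forwarding_status"] == "active"

ready = is_fully_ready(example_result)
```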
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by specifying the return data structure (registered_at, expires_at, nameservers, email-forwarding status) and the various status states (active/pending verification/pending NS propagation). However, it doesn't mention potential error conditions, rate limits, or authentication requirements, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two sentences: the first explains what the tool does and what it returns, the second provides clear usage examples. Every sentence adds value, and information is front-loaded with the core functionality stated first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read operation with no output schema, the description provides substantial context: purpose, return values, usage examples, and domain constraints. It effectively compensates for the lack of annotations and output schema. The only minor gap is the absence of error case information, but overall it's quite complete for this tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'domain' parameter. The description adds some context by specifying it must be 'a domain you previously registered via register_new_domain', which provides additional semantic meaning beyond the schema's basic description. This meets the baseline expectation when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Look up the registration status') and resource ('a domain you previously registered through MCPDomain'), distinguishing it from siblings like check_domain_availability (which checks availability) or register_new_domain (which creates new registrations). It explicitly identifies the tool's function as retrieving status information for existing domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool with concrete examples: 'user asks "is my domain ready?", "what's the status of X?", "did email forwarding finish setting up?"' This clearly distinguishes it from alternatives like configure_domain_dns or configure_domain_email which are for configuration actions rather than status queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_new_domain: A

Register a new domain. Returns a Stripe checkout URL for payment. After payment, domain is registered with FREE email forwarding, DNS, and AI bot monitoring. ALWAYS call check_domain_availability first. Collect first_name, last_name, email from user before calling.

Parameters (JSON Schema)
years (optional): Registration period in years. Default: 1. Most TLDs allow 1-10 years.
domain (required): Full domain name to register, e.g. 'sweetcrumbs.com'. Must first be verified free via check_domain_availability.
registrant (required): Legal registrant for the domain. Appears in WHOIS (private by default where the TLD supports it) and receives ICANN verification and renewal emails.
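A sketch of a full registration payload. The registrant sub-fields (first_name, last_name, email) are inferred from the description's "Collect first_name, last_name, email" note; the exact object shape is not published, so it is an assumption.

```python
# Illustrative arguments for register_new_domain. The registrant object
# shape is assumed from the description, not a published schema.

args = {
    "domain": "sweetcrumbs.com",   # must pass check_domain_availability first
    "years": 1,                    # optional; default 1, most TLDs allow 1-10
    "registrant": {
        "first_name": "Ada",
        "last_name": "Lovelace",
        "email": "ada@example.com",  # receives ICANN verification emails
    },
}
```

The call returns a Stripe checkout URL rather than completing registration directly, so an agent should surface that URL to the user and wait for payment.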
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: returns a Stripe checkout URL for payment, includes free services post-payment, and mentions ICANN verification and renewal emails through the registrant object. However, it doesn't explicitly mention mutation nature or potential side effects beyond payment, leaving some behavioral aspects implicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences that each serve distinct purposes: stating the core function, detailing included services, and providing critical usage guidelines. There's no wasted language, and important information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with no annotations and no output schema, the description provides good contextual coverage including payment flow, included services, and prerequisites. However, it doesn't fully describe the return value (what the Stripe checkout URL looks like or how to handle it) or potential error conditions, leaving some gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds minimal parameter semantics beyond the schema, mainly emphasizing the need to verify domain availability first and collect user data, but doesn't provide additional context about parameter interactions or usage patterns that aren't already in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Register a new domain'), distinguishes it from siblings by mentioning payment and included services (email forwarding, DNS, AI bot monitoring), and provides a distinct verb+resource combination that differentiates it from tools like 'check_domain_availability' or 'transfer_existing_domain'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('ALWAYS call check_domain_availability first') and provides prerequisites ('Collect first_name, last_name, email from user before calling'), giving clear guidance on sequencing and data requirements that distinguish it from alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

suggest_available_domains: A

Generate creative domain name suggestions from keywords or business description, with real-time availability checks. WHEN TO USE: user says 'help me find a domain for my bakery' or 'what domain should I use for X?'.

Parameters (JSON Schema)
tlds (optional): TLDs to search. Default: ['.com','.co','.io']
keywords (required): Business name or description, e.g. 'sweet crumbs bakery'
max_results (optional): Max suggestions. Default: 8
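The defaults above can be made explicit on the client side; the helper below is hypothetical and simply merges overrides over the documented defaults.

```python
# Illustrative arguments for suggest_available_domains. Defaults come
# from the schema; the merge helper is a local convenience, not server API.

DEFAULTS = {"tlds": [".com", ".co", ".io"], "max_results": 8}

def build_suggest_args(keywords: str, **overrides) -> dict:
    # Caller overrides win over the documented defaults.
    return {"keywords": keywords, **DEFAULTS, **overrides}

args = build_suggest_args("sweet crumbs bakery", max_results=5)
```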
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: the tool generates creative suggestions and performs real-time availability checks. However, it lacks details on rate limits, error handling, or the format of returned suggestions, which are important for a tool with no output schema. The description adds value but is incomplete for behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a structured 'WHEN TO USE' section. Every sentence earns its place by providing essential information without redundancy, making it efficient and well-organized for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, 100% schema coverage, no output schema), the description is mostly complete. It clearly states the purpose and usage guidelines but lacks details on output format or behavioral constraints like rate limits. With no output schema, more information on return values would improve completeness, but it's adequate for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description does not add any parameter-specific semantics beyond what the schema provides (e.g., it doesn't explain how 'keywords' are processed or the impact of 'tlds' on suggestions). Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('generate creative domain name suggestions') and resources ('from keywords or business description'), and distinguishes it from siblings by emphasizing suggestion generation rather than availability checking, configuration, or registration. It explicitly mentions 'real-time availability checks' as a feature, not the primary function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes an explicit 'WHEN TO USE' section with concrete examples ('user says 'help me find a domain for my bakery' or 'what domain should I use for X?''), providing clear context for when to invoke this tool. It implicitly distinguishes from siblings like 'check_domain_availability' by focusing on creative generation rather than direct availability queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transfer_existing_domain: A

Transfer a domain from another registrar (GoDaddy, Namecheap etc) to MCPDomain. Requires EPP/auth code. Transfer takes 5-7 days and adds 1 year to expiry.

Parameters (JSON Schema)
domain (required): Domain to transfer in, e.g. 'example.com'. Must be at least 60 days old since registration or a previous transfer, and must be unlocked at the losing registrar.
auth_code (required): EPP / authorization code from the current registrar. Also called 'transfer secret', 'auth info', or 'domain secret' in some registrar UIs.
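Both parameters are required, so a minimal payload sketch is straightforward; the auth code value below is obviously fake.

```python
# Illustrative arguments for transfer_existing_domain. The domain must
# be 60+ days past registration/last transfer and unlocked at the
# losing registrar, per the schema notes.

args = {
    "domain": "example.com",
    "auth_code": "EPP-XXXX-YYYY",  # fake; obtain from current registrar's UI
}
```

Note the transfer is slow (5-7 days) and adds a year to expiry, so an agent should set user expectations rather than poll for immediate completion.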
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the time frame ('Transfer takes 5-7 days'), the effect on expiry ('adds 1 year to expiry'), and the requirement ('Requires EPP/auth code'). However, it does not mention potential costs, error conditions, or what happens during the transfer period.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of just two sentences that efficiently convey purpose, requirements, and key behavioral details. Every word earns its place, with no redundant information or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (domain transfer operation), lack of annotations, and no output schema, the description does a good job covering essential aspects like purpose, requirements, and timeline. However, it lacks details on return values, error handling, or confirmation of success, which would be helpful for a mutation tool with no structured output information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description does not add any additional meaning or context beyond what is in the schema descriptions, such as clarifying parameter interactions or usage nuances. The baseline score of 3 is appropriate when the schema handles all parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Transfer a domain from another registrar to MCPDomain') and distinguishes it from sibling tools like 'register_new_domain' and 'check_domain_availability'. It specifies the source ('GoDaddy, Namecheap etc') and destination ('MCPDomain'), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Transfer a domain from another registrar') and mentions prerequisites ('Requires EPP/auth code'), but it does not explicitly state when NOT to use it or name alternatives like 'register_new_domain' for new domains. The context is sufficient but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

