instadomain
Server Details
Domain registration for AI agents via Stripe or x402 crypto with Cloudflare DNS.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.7/5 across 9 of 9 tools scored.
Each tool has a clearly distinct purpose with no overlap: check_domain/bulk for availability, buy_domain/crypto for purchase methods, get_domain_status for monitoring, renew_domain for renewal, unlock_domain/get_transfer_code for transfer operations, and suggest_domains for discovery. The two purchase tools are clearly differentiated by payment method (Stripe vs crypto).
All tools follow a consistent verb_noun naming pattern (e.g., check_domain, buy_domain, renew_domain, unlock_domain). The only variation is check_domains_bulk (plural noun) and buy_domain_crypto (payment method suffix), which are logical extensions that maintain readability and consistency.
With 9 tools, this server is well-scoped for domain registration and management. Each tool serves a specific, necessary function in the domain lifecycle from discovery to purchase, renewal, and transfer, with no redundant or trivial tools.
The tool set provides complete coverage for the domain registration domain: discovery (suggest_domains, check_domains_bulk), purchase (buy_domain, buy_domain_crypto), status monitoring (get_domain_status), renewal (renew_domain), and transfer preparation (unlock_domain, get_transfer_code). No obvious gaps exist for core workflows.
Available Tools
9 tools

buy_domain
Start the purchase flow for an available domain via Stripe checkout.
IMPORTANT: Before calling this tool, you MUST first call check_domain to get the price, then clearly show the user the price and get their explicit confirmation before proceeding. Never call buy_domain without the user seeing and approving the price first.
The registrant contact details are required because the domain will be registered in the buyer's name (they become the legal owner). WHOIS privacy is enabled by default, so these details are not publicly visible.
Creates a Stripe checkout session. Returns a checkout URL that the user should open in their browser to complete payment securely via Stripe, plus the order ID for tracking.
Args: domain: The domain to purchase (e.g. "coolstartup.com"). first_name: Registrant's first name. last_name: Registrant's last name. email: Registrant's email address. address1: Registrant's street address. city: Registrant's city. state: Registrant's state or province. postal_code: Registrant's postal/zip code. country: 2-letter ISO country code (e.g. "US", "GB", "DE"). phone: Phone number in format +1.5551234567. org_name: Organization name (optional, leave empty for individuals).
Returns: Dict with order_id, checkout_url, price_cents, and price_display.
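Before handing these arguments to the server, an agent can sanity-check the two formatted fields the description calls out (the 2-letter ISO country code and the `+1.5551234567` phone style). A minimal local sketch, with no network calls; the helper name and validation rules are illustrative, not part of the server's API:

```python
import re

def build_registrant(domain, first_name, last_name, email, address1,
                     city, state, postal_code, country, phone, org_name=""):
    # Assemble the buy_domain argument payload and validate the two
    # format-sensitive fields before sending it to the server.
    if not re.fullmatch(r"[A-Z]{2}", country):
        raise ValueError("country must be a 2-letter ISO code, e.g. 'US'")
    if not re.fullmatch(r"\+\d{1,3}\.\d{4,14}", phone):
        raise ValueError("phone must look like +1.5551234567")
    return {
        "domain": domain, "first_name": first_name, "last_name": last_name,
        "email": email, "address1": address1, "city": city, "state": state,
        "postal_code": postal_code, "country": country, "phone": phone,
        "org_name": org_name,  # empty string for individuals
    }

args = build_registrant(
    "coolstartup.com", "Ada", "Lovelace", "ada@example.com",
    "1 Analytical Way", "London", "LDN", "EC1A 1BB", "GB", "+44.2071234567")
```

Catching a malformed country code or phone number locally avoids burning a tool call on a request the server would reject.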
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | The domain to purchase (e.g. "coolstartup.com") | |
| first_name | Yes | Registrant's first name | |
| last_name | Yes | Registrant's last name | |
| email | Yes | Registrant's email address | |
| address1 | Yes | Registrant's street address | |
| city | Yes | Registrant's city | |
| state | Yes | Registrant's state or province | |
| postal_code | Yes | Registrant's postal/zip code | |
| country | Yes | 2-letter ISO country code (e.g. "US", "GB", "DE") | |
| phone | Yes | Phone number in format +1.5551234567 | |
| org_name | No | Organization name (optional, leave empty for individuals) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool initiates a purchase flow (implying a financial transaction), requires registrant contact details for legal ownership, enables WHOIS privacy by default, creates a Stripe checkout session, and returns a checkout URL for browser-based payment. It doesn't mention rate limits or error conditions, but covers the essential transactional behavior well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose, important prerequisites, behavioral context, parameter explanations, and return values. While comprehensive, it's appropriately sized for a complex financial transaction tool. Every sentence adds value, though it could be slightly more concise in the parameter explanations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (financial transaction with 11 parameters), no annotations, and the presence of an output schema, the description is remarkably complete. It covers prerequisites, behavioral context, parameter semantics, and references the return structure. The output schema handles return values, so the description appropriately focuses on usage guidance and parameter meaning.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 11 parameters, the description fully compensates by providing a detailed 'Args' section that explains each parameter's purpose and format. It clarifies that 'org_name' is optional for individuals, specifies the phone format, explains that country uses ISO codes, and contextualizes why registrant details are required (for legal ownership). This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Start the purchase flow for an available domain via Stripe checkout') and distinguishes it from siblings like 'check_domain' (which checks price) and 'buy_domain_crypto' (which uses cryptocurrency). It explicitly mentions the resource (domain) and the mechanism (Stripe checkout).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage instructions: 'Before calling this tool, you MUST first call check_domain to get the price, then clearly show the user the price and get their explicit confirmation before proceeding.' It also distinguishes when to use this tool versus alternatives by naming 'check_domain' as a prerequisite and implicitly differentiating from 'buy_domain_crypto'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
buy_domain_crypto
Start the purchase flow for a domain using USDC crypto payment (x402 protocol).
This is a 2-step process for autonomous agent payments:
Step 1: Call this tool to get an order_id and pay_url.
Step 2: Make an HTTP GET request to the pay_url. Your x402-enabled HTTP client will receive an HTTP 402 response with payment requirements, then automatically pay with USDC on Base. The payment and settlement happen via the x402 protocol (no browser or human needed).
After payment, call get_domain_status(order_id) to poll until complete.
Requires: An x402-compatible HTTP client with a funded USDC wallet on Base.
The registrant contact details are required because the domain will be registered in the buyer's name (they become the legal owner). WHOIS privacy is enabled by default, so these details are not publicly visible.
IMPORTANT: Before calling this tool, you MUST first call check_domain to get the price and confirm it with the user.
Args: domain: The domain to purchase (e.g. "coolstartup.com"). first_name: Registrant's first name. last_name: Registrant's last name. email: Registrant's email address. address1: Registrant's street address. city: Registrant's city. state: Registrant's state or province. postal_code: Registrant's postal/zip code. country: 2-letter ISO country code (e.g. "US", "GB", "DE"). phone: Phone number in format +1.5551234567. org_name: Organization name (optional, leave empty for individuals).
Returns: Dict with order_id, pay_url (full URL to GET with x402 client), price_usdc, price_cents, network, and asset contract address.
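The 2-step flow above can be sketched as a request/retry loop. This is a hedged outline only: `http_get` stands in for a real x402-enabled HTTP client (which would sign and settle the USDC payment itself), and the response shapes are illustrative, not the protocol's exact wire format:

```python
def pay_with_x402(pay_url, http_get):
    # Step 2 of the flow: GET the pay_url. An x402-enabled client sees
    # the initial HTTP 402 with payment requirements, settles USDC on
    # Base, then repeats the request with payment proof attached.
    resp = http_get(pay_url)
    if resp["status"] == 402:
        resp = http_get(pay_url, payment=resp["requirements"])
    return resp

def fake_http_get(url, payment=None):
    # Stand-in for an x402 client; no real network or wallet involved.
    if payment is None:
        return {"status": 402,
                "requirements": {"asset": "USDC", "network": "base"}}
    return {"status": 200, "body": "settled"}

result = pay_with_x402("https://example.com/pay/ord_abc123", fake_http_get)
```

After a successful settlement, the agent would move on to `get_domain_status(order_id)` as the description instructs.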
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | The domain to purchase (e.g. "coolstartup.com") | |
| first_name | Yes | Registrant's first name | |
| last_name | Yes | Registrant's last name | |
| email | Yes | Registrant's email address | |
| address1 | Yes | Registrant's street address | |
| city | Yes | Registrant's city | |
| state | Yes | Registrant's state or province | |
| postal_code | Yes | Registrant's postal/zip code | |
| country | Yes | 2-letter ISO country code (e.g. "US", "GB", "DE") | |
| phone | Yes | Phone number in format +1.5551234567 | |
| org_name | No | Organization name (optional, leave empty for individuals) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and excels at behavioral disclosure. It explains the 2-step process, payment mechanism (HTTP 402 response, automatic USDC payment on Base), post-payment workflow (polling with get_domain_status), default WHOIS privacy, and legal ownership implications. It also mentions the autonomous agent payment capability and x402 protocol specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (overview, 2-step process, requirements, important note, args, returns). Every sentence adds value: no repetition, no fluff. It's appropriately detailed for a complex financial transaction tool while remaining focused and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 11-parameter tool with no annotations but with output schema, the description provides complete context. It covers the multi-step workflow, prerequisites, payment mechanism, post-payment actions, parameter semantics, and references sibling tools. The output schema handles return values, so the description appropriately focuses on process and context rather than duplicating return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 11 parameters, the description compensates well by explaining why registrant details are required ('domain will be registered in the buyer's name'), format guidance for phone ('+1.5551234567') and country ('2-letter ISO'), and clarifies optionality of org_name. However, it doesn't provide format details for all parameters like address1 or postal_code.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Start the purchase flow for a domain using USDC crypto payment (x402 protocol).' It specifies the action (purchase flow), resource (domain), and method (USDC crypto via x402 protocol). It distinguishes from sibling 'buy_domain' by explicitly mentioning the crypto payment method.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: 'Before calling this tool, you MUST first call check_domain to get the price and confirm it with the user.' It also references the alternative 'buy_domain' (implied non-crypto version) and specifies prerequisites like requiring an x402-compatible HTTP client with funded USDC wallet. It clearly states when to use this tool versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_domain
Check if a domain is available for purchase and get its price.
Always call this before buy_domain. Show the user the price_display value (e.g. "$18.12") and confirm they want to proceed before buying.
Args: domain: The full domain name to check (e.g. "coolstartup.com").
Returns: Dict with availability status, price in cents, and formatted price. If available, includes price_cents and price_display for the 1-year registration cost.
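The relationship between the two pricing fields in the return value (integer cents versus the human-readable string) can be sketched locally. The helper name is hypothetical; only the cents-to-display convention comes from the description's example ("$18.12"):

```python
def format_price(price_cents):
    # Render an integer cents value the way price_display presents it,
    # e.g. 1812 -> "$18.12", for showing the user before purchase.
    return f"${price_cents / 100:.2f}"
```

An agent would surface `format_price(result["price_cents"])` (or the server-provided `price_display` directly) and wait for explicit confirmation before calling buy_domain.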
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | The full domain name to check (e.g. "coolstartup.com") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It effectively describes key behaviors: this is a read-only check operation (implied by 'check' and 'get its price'), it returns pricing information, and it has a specific workflow relationship with buy_domain. However, it doesn't mention potential rate limits, authentication requirements, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise: purpose statement first, usage guidelines second, parameter documentation third, and return value explanation last. Every sentence earns its place with no wasted words, and information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, no annotations, and the presence of an output schema, the description provides excellent contextual completeness. It explains the purpose, usage workflow, parameter meaning, and return value structure. The output schema existence means the description doesn't need to detail the exact return format, which it appropriately doesn't over-specify.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description must compensate. It provides clear parameter semantics: 'domain: The full domain name to check (e.g. "coolstartup.com")' gives both the parameter name and an example format. This adequately documents the single parameter beyond what the bare schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('check if a domain is available for purchase and get its price'), identifies the resource (domain), and distinguishes it from siblings by focusing on single-domain availability checking versus bulk operations (check_domains_bulk) or other domain management functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Always call this before buy_domain' establishes a clear prerequisite relationship, and 'Show the user the price_display value...and confirm they want to proceed before buying' gives specific implementation instructions. This distinguishes when to use this tool versus alternatives like buy_domain directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_domains_bulk
Check availability of up to 50 domain names in one call.
Uses fast RDAP lookups (no pricing). Returns a summary with total/available/taken counts plus per-domain details and affiliate registration links for available domains.
Args: domains: List of domain names to check (max 50).
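An agent with more than 50 candidate names has to split them across calls to respect the cap. A minimal sketch of that batching (the helper is hypothetical; only the 50-domain limit comes from the description):

```python
def chunk_domains(domains, limit=50):
    # Split a candidate list into batches that each fit within one
    # check_domains_bulk call (max 50 names per call).
    return [domains[i:i + limit] for i in range(0, len(domains), limit)]

batches = chunk_domains([f"name{i}.com" for i in range(120)])
```

Each batch would then be passed as the `domains` argument of a separate call, and the per-batch summaries merged by the agent.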
| Name | Required | Description | Default |
|---|---|---|---|
| domains | Yes | List of domain names to check (max 50) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: the operation method ('fast RDAP lookups'), cost implication ('no pricing'), response content ('summary with total/available/taken counts plus per-domain details and affiliate registration links'), and capacity limit ('up to 50 domain names'). It lacks details on error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. Every sentence adds value: the second explains the method and output, and the third clarifies the parameter. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (bulk operation with one parameter), no annotations, and an output schema (which handles return values), the description is mostly complete. It covers purpose, usage, behavior, and parameters well, but could benefit from mentioning prerequisites like domain format or error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics beyond the schema by explaining the parameter's purpose ('List of domain names to check'), format constraints ('max 50'), and usage context, though it doesn't specify domain name format requirements or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('check availability') and resources ('domain names'), explicitly distinguishes it from siblings by mentioning bulk capability ('up to 50 domain names in one call'), and contrasts with pricing-based tools ('no pricing').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (bulk checking up to 50 domains with fast RDAP lookups) and implicitly suggests alternatives for single-domain checks (sibling 'check_domain'), but does not explicitly state when NOT to use it or compare with all relevant siblings like 'get_domain_status'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_domain_status
Get the status of a domain purchase order.
Polls the backend every 3 seconds (up to 120 seconds) until the order reaches a terminal state (complete or failed). Returns the final order status including nameservers and DNS token if available.
Args: order_id: The order ID returned from buy_domain (e.g. "ord_abc123").
Returns: Dict with order status, domain, nameservers, and CF DNS token if complete.
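Because each call only polls for up to 120 seconds, an order that is still processing when the window closes needs the caller to re-invoke the tool. A sketch of that caller-side loop, where `get_status` is a stand-in for the actual MCP tool call and the retry cap is an assumed value:

```python
def wait_for_order(get_status, order_id, max_calls=5):
    # Re-invoke get_domain_status until it reports a terminal state;
    # the tool itself handles the 3s/120s polling within each call.
    for _ in range(max_calls):
        status = get_status(order_id)
        if status["status"] in ("complete", "failed"):
            return status
    raise TimeoutError(f"order {order_id} not terminal after {max_calls} calls")

# Stand-in that reaches "complete" on the third call (no real server).
calls = {"n": 0}
def fake_status(order_id):
    calls["n"] += 1
    state = "complete" if calls["n"] >= 3 else "processing"
    return {"status": state, "order_id": order_id}

final = wait_for_order(fake_status, "ord_abc123")
```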
| Name | Required | Description | Default |
|---|---|---|---|
| order_id | Yes | The order ID returned from buy_domain (e.g. "ord_abc123") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does so effectively. It discloses critical behavioral traits: polling behavior (3-second intervals, 120-second timeout), terminal state conditions (complete/failed), and return content (nameservers, DNS token). This goes well beyond what a basic 'get status' description would provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement, behavioral details, and separate Args/Returns sections. Every sentence adds value: the polling behavior is essential context, and the parameter/return explanations are necessary given the lack of schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (polling behavior, terminal states) and the presence of an output schema, the description is complete. It explains the tool's purpose, when to use it, its polling behavior, parameter meaning, and return content - covering all essential aspects despite no annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and only 1 parameter, the description fully compensates by explaining the 'order_id' parameter's purpose ('returned from buy_domain'), format example ('ord_abc123'), and relationship to sibling tools. It adds meaningful context beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get the status'), target resource ('domain purchase order'), and behavior ('Polls the backend every 3 seconds... until terminal state'). It distinguishes from siblings like 'check_domain' by focusing on purchase order status rather than domain availability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: after a domain purchase (referencing 'order_id returned from buy_domain'). However, it doesn't mention when NOT to use it or alternatives for checking domain status outside purchase contexts (like 'check_domain').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transfer_code
Get the EPP/transfer authorization code for a completed domain purchase.
Use this when the domain owner wants to transfer their domain to another registrar. The order must be in "complete" status. The auth code is required by the receiving registrar to authorize the transfer.
Args: order_id: The order ID of a completed domain purchase.
Returns: Dict with order_id, domain, and auth_code.
| Name | Required | Description | Default |
|---|---|---|---|
| order_id | Yes | The order ID of a completed domain purchase | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior by specifying the required order status ('complete' status) and the purpose of the output ('auth code is required... to authorize the transfer'), though it doesn't mention potential errors, rate limits, or authentication needs. This provides good context but leaves some behavioral aspects uncovered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by usage guidelines and parameter/return details. Every sentence earns its place by providing necessary information without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (1 parameter, no annotations, but with an output schema), the description is complete. It covers purpose, usage conditions, parameter meaning, and return values. The presence of an output schema means the description doesn't need to detail return structure, and it adequately addresses the gaps in schema coverage and lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, so the description must fully compensate. It clearly explains the single parameter ('order_id: The order ID of a completed domain purchase'), adding essential meaning beyond the bare schema. This fully addresses the parameter semantics gap caused by the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get the EPP/transfer authorization code') and resource ('for a completed domain purchase'), distinguishing it from siblings like 'get_domain_status' or 'unlock_domain' which involve different operations. It precisely defines the tool's function without being vague or tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('when the domain owner wants to transfer their domain to another registrar') and includes prerequisites ('The order must be in "complete" status'). It also explains the purpose of the auth code ('required by the receiving registrar to authorize the transfer'), providing clear context for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
renew_domain
Renew a domain for 1 additional year.
Creates a Stripe checkout session for the renewal payment. The user must open the checkout URL to complete payment, after which the domain will be renewed automatically via the registrar.
The order must be in "complete" status (i.e., the domain was previously registered successfully).
Args: order_id: The order ID of a completed domain purchase (e.g. "ord_abc123").
Returns: Dict with order_id, checkout_url, price_cents, price_display, domain, and renewal_years.
| Name | Required | Description | Default |
|---|---|---|---|
| order_id | Yes | The order ID of a completed domain purchase (e.g. "ord_abc123") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the multi-step process (creates Stripe checkout session, requires user to open URL, automatic renewal after payment), including payment requirements and prerequisites (order must be 'complete'). However, it doesn't mention error conditions or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. Each sentence earns its place: first states the action, then explains the payment flow, then provides prerequisites, and finally documents parameters and returns. No redundant information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 1 parameter, no annotations, 0% schema coverage, but with output schema, the description provides excellent coverage of the action, process, prerequisites, and parameter semantics. The output schema handles return values, so the description appropriately focuses on behavior. It could mention error cases but is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description must fully compensate. It provides specific semantic meaning for the single parameter: 'order_id: The order ID of a completed domain purchase (e.g. "ord_abc123")', including format examples and the requirement that it must be from a completed purchase. This adds substantial value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Renew a domain for 1 additional year') and distinguishes it from siblings like 'buy_domain' (new purchase) and 'get_domain_status' (status check). It explicitly mentions the resource (domain) and the duration (1 year), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: when a domain order is in 'complete' status and renewal is needed. It implicitly distinguishes from 'buy_domain' (for new purchases) and 'check_domain' (for availability checks), but doesn't explicitly list all alternatives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
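The multi-step checkout flow described above (create a Stripe checkout session, have the user open the URL, renewal completes after payment) can be sketched from the agent's side. This is a hypothetical illustration: the `client.call_tool` method and the `checkout_url` field name are assumptions, not a documented SDK.

```python
# Hypothetical agent-side renewal flow. `client` is any MCP client
# exposing a call_tool(name, args) method; the "checkout_url" field
# name in the result is an assumption based on the description above.

def renew_and_confirm(client, order_id: str) -> str:
    """Start a renewal and return the Stripe checkout URL the user must open."""
    result = client.call_tool("renew_domain", {"order_id": order_id})
    checkout_url = result["checkout_url"]  # assumed field name
    # Payment happens out of band: surface the URL to the user, then
    # poll get_domain_status afterwards to confirm the renewal applied.
    return checkout_url
```

Because payment occurs in the browser rather than in the tool call, an agent should treat a successful call as "renewal initiated", not "renewal complete".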
suggest_domains
Generate domain name ideas from a keyword and check their availability.
Uses common prefix/suffix patterns to generate 10-15 domain candidates across .com, .io, .ai, .dev, .co and checks all of them via fast RDAP lookups. Returns available domains with affiliate registration links.
Args: keyword: A keyword or short business name (e.g. "taskflow").
| Name | Required | Description | Default |
|---|---|---|---|
| keyword | Yes | A keyword or short business name (e.g. "taskflow") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
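The prefix/suffix generation strategy the description outlines can be sketched as below. The specific affix lists are assumptions for illustration; the server's actual patterns are not published, and the real tool additionally checks each candidate via RDAP before returning results.

```python
# Minimal sketch of keyword-based candidate generation. The PREFIXES
# and SUFFIXES lists are assumed examples, not the server's real ones.

PREFIXES = ["get", "try", "use"]
SUFFIXES = ["app", "hq", "labs"]
TLDS = [".com", ".io", ".ai", ".dev", ".co"]

def generate_candidates(keyword: str) -> list[str]:
    """Combine a keyword with common affixes across the listed TLDs."""
    bases = [keyword]
    bases += [p + keyword for p in PREFIXES]
    bases += [keyword + s for s in SUFFIXES]
    # The real tool would trim this to 10-15 candidates and run an
    # RDAP availability lookup on each before returning them.
    return [base + tld for base in bases for tld in TLDS]
```

A usage example: `generate_candidates("taskflow")` yields names like `taskflow.com`, `gettaskflow.io`, and `taskflowapp.ai`.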
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses the key behavioral traits: generation method ('common prefix/suffix patterns'), output volume ('10-15 domain candidates'), TLDs checked ('.com, .io, .ai, .dev, .co'), checking mechanism ('fast RDAP lookups'), and return format ('available domains with affiliate registration links'). It doesn't mention rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a high-level summary sentence followed by implementation details and parameter documentation. Every sentence adds value, with no redundant information, and it's appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, no annotations, and the presence of an output schema, the description is complete enough. It covers purpose, behavior, parameters, and output format without needing to duplicate what the output schema will provide about return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and only one parameter, the description fully compensates by providing clear semantics for the 'keyword' parameter: 'A keyword or short business name (e.g. "taskflow")'. This adds essential meaning beyond the bare schema type definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('generate domain name ideas', 'check their availability') and resources ('domain candidates', 'affiliate registration links'). It distinguishes from siblings like 'check_domain' by emphasizing generation of multiple candidates and availability checking across multiple TLDs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('from a keyword') and implies when not to use it (for single-domain checking or bulk operations without generation). However, it doesn't explicitly name alternative tools like 'check_domain' or 'check_domains_bulk' for comparison.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
unlock_domain
Remove the registrar transfer lock from a completed domain purchase.
Domains are locked by default to prevent unauthorized transfers. Call this before initiating a transfer to another registrar. The order must be in "complete" status.
Args: order_id: The order ID of a completed domain purchase.
Returns: Dict with order_id, domain, and unlocked status.
| Name | Required | Description | Default |
|---|---|---|---|
| order_id | Yes | The order ID of a completed domain purchase | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
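The transfer-out sequence the description implies (unlock first, then fetch the authorization code) can be chained in a small helper. This is a hypothetical sketch: `client.call_tool` and the `unlocked` / `transfer_code` result fields are assumptions drawn from the tool descriptions, not a documented SDK.

```python
# Hypothetical transfer-out helper. Unlocks the domain, verifies the
# unlock succeeded, then requests the transfer (EPP) authorization code.

def prepare_transfer(client, order_id: str) -> str:
    unlocked = client.call_tool("unlock_domain", {"order_id": order_id})
    if not unlocked.get("unlocked"):  # "unlocked" field per the docstring's Returns line
        raise RuntimeError(f"could not unlock order {order_id}")
    code = client.call_tool("get_transfer_code", {"order_id": order_id})
    return code["transfer_code"]  # assumed field name
```

Keeping the two calls in this order matters: most registrars reject transfer-code requests (or the transfer itself) while the registrar lock is still active.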
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden and does well: it discloses that this is a mutation (removing a lock), notes the default security behavior (domains are locked to prevent unauthorized transfers), and specifies a prerequisite status. It doesn't cover rate limits, auth needs, or error cases, but provides solid operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the core purpose, followed by context, usage guidance, and structured Arg/Return sections. Every sentence earns its place—no fluff, well-organized for quick parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (1 param, mutation), no annotations, but with an output schema (so return values are documented), the description is complete: it covers purpose, when to use, prerequisites, parameter meaning, and behavioral context. No significant gaps for an agent to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description compensates by explaining the parameter's meaning ('The order ID of a completed domain purchase') and tying it to the prerequisite status. It adds value beyond the bare schema, though it doesn't detail format (e.g., numeric vs. string).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Remove the registrar transfer lock') on a specific resource ('from a completed domain purchase'), distinguishing it from sibling tools like 'buy_domain', 'renew_domain', or 'get_transfer_code'. It explains the default locking behavior and the tool's purpose in enabling transfers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided on when to use this tool ('Call this before initiating a transfer to another registrar') and prerequisites ('The order must be in "complete" status'). It also implies when not to use it (e.g., for incomplete orders or non-transfer scenarios), though alternatives aren't named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
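The verification Glama performs presumably boils down to fetching the manifest and matching the maintainer email against the claiming account. A minimal sketch of that check, assuming the structure shown above (the exact validation rules Glama applies are not published):

```python
import json

# Assumed schema URL, taken from the example manifest above.
EXPECTED_SCHEMA = "https://glama.ai/mcp/schemas/connector.json"

def manifest_claims_email(raw: str, account_email: str) -> bool:
    """Return True if the raw glama.json text names account_email as a maintainer."""
    data = json.loads(raw)
    if data.get("$schema") != EXPECTED_SCHEMA:
        return False
    emails = [m.get("email") for m in data.get("maintainers", [])]
    return account_email in emails
```

Running this against the example manifest with a mismatched email returns False, which is why the file must list the same address as your Glama account.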
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.