
Name Whisper — ENS Intelligence Layer

Server Details

34 MCP tools to search, register, manage, value, and trade ENS names. AI-powered intelligence layer.

Status: Healthy

Transport: Streamable HTTP



Available Tools

34 tools
approve_operator

Approve or revoke an operator for ENS contract interactions.

An approved operator can transfer ANY token owned by the approver on the specified contract. This is setApprovalForAll — it covers all tokens, not just one.

Contracts:

  • base_registrar — ERC-721 tokens (unwrapped .eth names)

  • name_wrapper — ERC-1155 tokens (wrapped names and subnames)

  • ens_registry — ENS node ownership

Common use cases:

  • Approve NameWrapper on BaseRegistrar before wrapping a name

  • Approve a marketplace contract for trading

  • Approve a management contract for batch operations

  • Revoke a previously approved operator

Contract addresses:

  • BaseRegistrar: 0x57f1887a8BF19b14fC0dF6Fd9B2acc9Af147eA85

  • NameWrapper: 0xD4416b13d2b3a9aBae7AcD5D6C2BbDBE25686401

  • ENS Registry: 0x00000000000C2E074eC69A0dFb2997BA6C7d2e1e

WARNING: Only approve addresses you trust. An approved operator can move ALL your names on that contract.
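The prerequisite workflow (approve NameWrapper on the BaseRegistrar before wrapping) can be sketched as a client-side payload builder. A minimal Python sketch, assuming the argument names owner/operator/contract/approved from this tool's schema; `build_approve_operator_args` and the owner address are illustrative, not part of the server:

```python
# Mainnet contract addresses from the description above.
BASE_REGISTRAR = "0x57f1887a8BF19b14fC0dF6Fd9B2acc9Af147eA85"
NAME_WRAPPER = "0xD4416b13d2b3a9aBae7AcD5D6C2BbDBE25686401"

VALID_CONTRACTS = ("base_registrar", "name_wrapper", "ens_registry")

def build_approve_operator_args(owner, operator,
                                contract="base_registrar", approved=True):
    """Assemble an approve_operator argument dict; reject unknown contract enums."""
    if contract not in VALID_CONTRACTS:
        raise ValueError(f"unknown contract: {contract!r}")
    return {"owner": owner, "operator": operator,
            "contract": contract, "approved": approved}

# The wrap-name prerequisite: approve NameWrapper on the BaseRegistrar.
args = build_approve_operator_args(
    owner="0x1111111111111111111111111111111111111111",  # hypothetical owner
    operator=NAME_WRAPPER,
)
```

Passing `approved=False` with the same operator would express the revoke case.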

Parameters (JSON Schema)

  • owner (required): Address granting/revoking operator approval

  • approved (optional): true to approve, false to revoke (default: true)

  • contract (required): Which contract to set approval on: base_registrar (ERC-721), name_wrapper (ERC-1155), or ens_registry

  • operator (required): Address being approved/revoked as operator

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It excellently discloses: (1) scope ('transfer ANY token', 'covers all tokens, not just one'), (2) security implications ('WARNING: Only approve addresses you trust'), (3) specific contract addresses for verification, and (4) exact token standards per contract.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear hierarchy: action → behavioral scope → contract types → use cases → addresses → security warning. Every sentence earns its place; appropriately verbose for a high-stakes blockchain operation requiring security warnings.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a sensitive state-changing blockchain operation. Covers security risks, contract addresses, prerequisite workflows, and token standard distinctions. No output schema exists but description doesn't need to explain return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good base descriptions. Description adds valuable semantic context: maps abstract contract enums to ENS-specific concepts ('unwrapped .eth names', 'wrapped names'), provides actual hex addresses for verification, and clarifies the boolean toggle behavior through opening statement and examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific action 'Approve or revoke an operator for ENS contract interactions' and distinguishes clearly from sibling tools (which handle registration, transfer, wrapping) by focusing on setApprovalForAll permissions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Common use cases' section listing four specific scenarios including prerequisite workflow 'Approve NameWrapper on BaseRegistrar before wrapping a name' which clarifies relationship to wrap_name sibling, plus marketplace and batch operation examples.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bulk_register

Register multiple ENS names in bulk — batched into just 2 transactions (commit + register) via Multicall3.

Much cheaper and faster than registering names one at a time. Supports up to 20 names per batch.

Flow: batchCommit (1 tx) → wait 60 seconds → batchRegister (1 tx with total ETH). Excess ETH is refunded.

Each name gets its own secret for front-running protection. All names must be available for registration.

After registration, use bulk_set_records to configure all names at once with addresses, text records, and content hashes.
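The flow above implies a client should validate the batch before submitting. A hedged sketch, assuming the parameter names names/durationYears/walletAddress from this tool's schema; appending ".eth" to bare labels is an assumption for illustration, and the wallet address is hypothetical:

```python
MAX_BATCH = 20  # per-batch limit stated in the description

def prepare_bulk_register(names, wallet_address, duration_years=1):
    """Client-side validation before calling bulk_register.

    Normalizing bare labels to ".eth" is an assumption; the server
    may also accept labels without the suffix.
    """
    if not names:
        raise ValueError("names must be non-empty")
    if len(names) > MAX_BATCH:
        raise ValueError(f"max {MAX_BATCH} names per batch")
    normalized = [n if n.endswith(".eth") else n + ".eth" for n in names]
    return {
        "names": normalized,
        "durationYears": duration_years,
        "walletAddress": wallet_address,
    }

payload = prepare_bulk_register(
    ["coffee", "tea.eth"],
    "0x1111111111111111111111111111111111111111",  # hypothetical wallet
)
```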

Parameters (JSON Schema)

  • names (required): Array of ENS names to register (e.g. ["coffee.eth", "tea.eth"]). Max 20 per batch.

  • durationYears (optional): Registration duration in years (same for all names, default: 1)

  • walletAddress (required): Wallet address that will own all registered names

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden. It effectively explains the two-phase transaction process, 60-second waiting period, excess ETH refund behavior, front-running protection (individual secrets), and ownership model. Minor gap: does not explicitly label this as a state-changing transaction requiring gas fees, though 'total ETH' implies cost.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Eight sentences efficiently structured: opening defines purpose/mechanism, followed by benefits, constraints, detailed flow, financial handling, security features, prerequisites, and next steps. Every sentence conveys distinct operational information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex blockchain operation with no output schema or annotations, the description comprehensively covers the registration lifecycle, financial implications, and sibling tool integration. Minor deduction for not describing failure modes (e.g., what happens if availability check fails during execution) or explicit return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, documenting all parameters (names array with constraints, durationYears with default, walletAddress). The description reinforces the 20-name limit and duration behavior ('same for all names') but does not add significant semantic depth beyond the comprehensive schema. Baseline 3 is appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Register[s] multiple ENS names in bulk' with specific mechanism (batched via Multicall3). It clearly distinguishes from single-name registration ('Much cheaper and faster than registering names one at a time') and explicitly references sibling tool bulk_set_records for post-registration configuration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit temporal flow (batchCommit → wait 60s → batchRegister), prerequisites ('All names must be available'), cost guidance ('Much cheaper...than one at a time'), and clear next-step reference ('After registration, use bulk_set_records'). Also specifies batch constraints (up to 20 names).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bulk_set_records

Set ENS resolver records for multiple names in a single transaction — bulk record editing.

All record updates across all names are batched into one resolver.multicall() call. Much cheaper than setting records one name at a time.

Supports the same record types as set_ens_records: address records (ETH, BTC, SOL, etc.), text records (avatar, description, social handles, AI agent metadata), content hash (IPFS/IPNS), and ENSIP-25 agent-registration records.

You can set different records for each name (e.g., unique avatars) or the same records across all names (e.g., same ETH address).

Max 50 names per batch. The wallet signing the transaction must be the owner/manager of all included names.
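The "same records across all names" pattern can be sketched as a helper that fans one record set out into the nameRecords array. The nested {"name": ..., "records": ...} shape and the inner record keys are assumptions; consult the tool's JSON schema for the exact field names:

```python
MAX_NAMES = 50  # per-batch limit stated in the description

def same_records_for_all(names, records):
    """Apply one record set to every name (e.g. the same ETH address).

    The nested shape here is an assumption, not the server's schema.
    """
    if len(names) > MAX_NAMES:
        raise ValueError(f"max {MAX_NAMES} names per batch")
    return [{"name": n, "records": records} for n in names]

payload = {
    "nameRecords": same_records_for_all(
        ["coffee.eth", "tea.eth"],
        {"addresses": {"ETH": "0x2222222222222222222222222222222222222222"}},
    ),
    "walletAddress": "0x2222222222222222222222222222222222222222",
}
```

Per-name records (e.g. unique avatars) would instead build each array entry individually.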

Parameters (JSON Schema)

  • nameRecords (required): Array of name + records pairs. Max 50 names per batch.

  • walletAddress (required): Wallet address that owns these names (must sign the transaction)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden effectively. It discloses the batching mechanism ('resolver.multicall()'), cost efficiency, ownership requirements, and the 50-name limit. It lacks details on atomicity (whether partial failures are possible) and does not describe the return value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five well-structured sentences with zero redundancy. Information is front-loaded with purpose, followed by technical mechanism, capabilities, usage patterns, and constraints. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity of bulk ENS mutations and absence of annotations/output schema, the description comprehensively covers operational mechanics, authorization rules, and limits. The only gap is the lack of description for what the tool returns (e.g., transaction hash).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema coverage is 100%, the description adds significant semantic value by providing concrete examples of record types ('ETH, BTC, SOL', 'avatar, description') and explaining usage patterns ('different records for each name... or the same records across all names'), which helps the agent understand how to populate the nested nameRecords structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Set'), clear resource ('ENS resolver records'), and distinct scope ('multiple names in a single transaction — bulk record editing'). It explicitly references sibling tool set_ens_records to differentiate the bulk vs. single-name use case.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use (bulk operations are 'Much cheaper than setting records one name at a time') and references the alternative tool set_ens_records. It states the 50-name limit and ownership prerequisites. However, it does not explicitly state when NOT to use it (e.g., 'do not use for single names').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bulk_transfer_ens_names

Transfer multiple ENS names in a single transaction via Multicall3 — bulk send.

Much cheaper and faster than transferring names one at a time. Supports up to 20 names per batch.

Automatically detects whether each name is wrapped (NameWrapper/ERC-1155) or unwrapped (BaseRegistrar/ERC-721) and builds the correct transfer call for each.

All names can go to the same recipient or to different recipients — specify a toAddress per name.

Requirements:

  • The fromAddress must currently own ALL names in the batch

  • All addresses must be valid Ethereum addresses

  • Names must be registered (not expired)

WARNING: This transfers FULL ownership of every name. Recipients gain complete control.

Resolver records (avatar, addresses, etc.) are NOT affected by transfer — they stay on each name.

After transfer, consider using bulk_set_records to update ETH address records on the transferred names.
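The per-name recipient pattern and the address-validity requirement can be enforced client-side before calling the tool. A sketch assuming the transfers/fromAddress parameter names; the nested toAddress key and the regex-only address check (no EIP-55 checksum validation) are assumptions:

```python
import re

# Shape-only check: 0x followed by 40 hex characters (no checksum).
ADDR_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def build_bulk_transfer(pairs, from_address):
    """pairs: list of (name, recipient). Mirrors the transfers/fromAddress
    parameters; the nested toAddress key is an assumption."""
    if len(pairs) > 20:
        raise ValueError("max 20 per batch")
    if not ADDR_RE.match(from_address):
        raise ValueError("invalid fromAddress")
    transfers = []
    for name, to_address in pairs:
        if not ADDR_RE.match(to_address):
            raise ValueError(f"invalid recipient for {name}")
        transfers.append({"name": name, "toAddress": to_address})
    return {"transfers": transfers, "fromAddress": from_address}
```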

Parameters (JSON Schema)

  • transfers (required): Array of name + recipient pairs. Max 20 per batch. All names must be owned by fromAddress.

  • fromAddress (required): Current owner wallet address (must sign the transaction). All names must be owned by this address.

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses critical behavioral traits: automatic detection of wrapped vs unwrapped names (NameWrapper/ERC-1155 vs BaseRegistrar/ERC-721), side effects (resolver records NOT affected), safety warning (transfers FULL ownership), and validation constraints (fromAddress must own all names, valid Ethereum addresses, not expired).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear information hierarchy: opening statement, efficiency rationale, technical implementation details, usage patterns, requirements list, warning block, and follow-up recommendation. Every sentence provides distinct value; no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex blockchain mutation tool with no output schema and no annotations, the description comprehensively covers: prerequisites (ownership, expiry), execution mechanics (Multicall3, wrapped detection), safety implications (full ownership transfer), side effects (resolver preservation), and post-operation workflow (bulk_set_records recommendation).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds validation constraints not explicit in schema (names must be registered/not expired, all addresses must be valid Ethereum addresses) and clarifies the array structure pattern ('specify a toAddress per name').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicitly states 'Transfer multiple ENS names in a single transaction via Multicall3 — bulk send' with specific technical mechanism (Multicall3) and scope (up to 20 names). Distinguishes from sibling tool transfer_ens_name by emphasizing 'bulk' and 'much cheaper and faster than transferring names one at a time.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear efficiency-based guidance for when to use (bulk vs single transfers) and explicitly references sibling tool bulk_set_records as a follow-up action ('After transfer, consider using bulk_set_records'). Lacks explicit naming of transfer_ens_name as the single-name alternative, though strongly implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_availability

Check availability of one or more ENS names. Returns status (AVAILABLE, REGISTERED, EXPIRED, or INVALID), owner address, and expiry date for each name. Validates ENS character rules. Accepts names with or without .eth suffix.
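A cheap local pre-filter can weed out obviously invalid labels before calling this tool. This is only a rough approximation of ENS character rules (real validity follows ENSIP-15 normalization, which this sketch does not implement); the tool itself remains authoritative:

```python
import re

# Simplified rule: lowercase alphanumerics and hyphens, at least 3 chars.
# Real ENS validity follows ENSIP-15; this is only a cheap pre-filter.
LABEL_RE = re.compile(r"^[a-z0-9-]{3,}$")

def quick_label_check(name: str) -> bool:
    """Return False for labels that are certainly invalid under the
    simplified rule above; accepts names with or without .eth."""
    label = name[:-4] if name.endswith(".eth") else name
    return bool(LABEL_RE.match(label))
```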

Parameters (JSON Schema)

  • names (required): Array of ENS names or labels to check (e.g. ["coffee", "tea.eth", "pixel"])

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and succeeds well: it discloses the four possible status return values (AVAILABLE, REGISTERED, EXPIRED, INVALID), additional return fields (owner address, expiry date), validation behavior (ENS character rules), and input normalization (accepts with/without .eth suffix). It effectively compensates for the missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, zero waste. Front-loaded with purpose ('Check availability...'), followed by return values, validation behavior, and input flexibility. Every sentence provides distinct, non-redundant information that aids agent decision-making.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter (fully documented), lack of annotations, and absence of output schema, the description is remarkably complete. It details return values and statuses that would normally appear in an output schema. Minor deduction for not mentioning read-only nature or rate limits, though 'Check' implies safe operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema description coverage, the description adds meaningful context beyond the schema's examples. Specifically, it clarifies that the tool 'Accepts names with or without .eth suffix' and validates character rules, providing semantic understanding not fully captured in the schema's JSON examples alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Check') and resource ('availability of one or more ENS names'), clearly distinguishing it from sibling tools like purchase_name, bulk_register, or transfer_ens_name. It precisely defines the scope (availability checking) versus management or acquisition operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description clearly explains what the tool does and returns, it lacks explicit guidance on when to use it versus alternatives (e.g., 'use this before purchase_name to verify availability'). Usage is implied by the function but not explicitly contextualized among the 30+ sibling ENS tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

extend_subname_expiry

Extend the expiry of an ENS subname in the NameWrapper.

Subnames have their own expiry that cannot exceed the parent name's expiry. This tool extends a subname's expiry.

Who can call this:

  • The parent name owner (always)

  • The subname owner (only if CAN_EXTEND_EXPIRY fuse is burned on the subname)

Use cases:

  • Extending subnames you've issued to users

  • Self-extending your own subname (if CAN_EXTEND_EXPIRY is set)

  • Keeping organizational subnames active
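The authorization rules above can be mirrored in a local check before issuing the transaction. This is a sketch, not the NameWrapper contract's actual access logic; CAN_EXTEND_EXPIRY = 0x40000 matches the NameWrapper fuse constant, but verify it against the deployed contract:

```python
# NameWrapper fuse constant; verify against the deployed contract
# before relying on it.
CAN_EXTEND_EXPIRY = 0x40000

def can_extend(caller, parent_owner, subname_owner, fuses):
    """Mirror of the rules above: the parent owner may always extend;
    the subname owner only if CAN_EXTEND_EXPIRY is burned. A local
    sketch, not the contract's actual access check."""
    if caller == parent_owner:
        return True
    return caller == subname_owner and bool(fuses & CAN_EXTEND_EXPIRY)
```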

Parameters (JSON Schema)

  • name (required): Full subname to extend (e.g. "sub.coffee.eth")

  • years (optional): Number of years to extend from now (default: 1). Cannot exceed parent expiry.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses critical behavioral constraints: the parent expiry ceiling, NameWrapper context, and the CAN_EXTEND_EXPIRY fuse requirement. It implies state mutation through 'burned' fuse terminology, though it could explicitly state this is a write operation or mention reversibility.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (purpose, constraints, authorization, use cases). Slightly redundant with 'This tool extends a subname's expiry' restating the opening sentence, but overall efficient with no extraneous content. The bullet points earn their place by clarifying complex authorization logic.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of ENS fuses and NameWrapper logic, the description adequately covers authorization and domain constraints. However, with no output schema provided, it should ideally describe the return value (transaction hash, success boolean) or failure modes, which are absent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with clear descriptions ('Full subname to extend', 'Number of years to extend'). The description reinforces the parent expiry constraint but adds no additional parameter semantics (format examples, validation edge cases) beyond what the schema already provides, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a precise action ('Extend the expiry') and resource ('ENS subname in the NameWrapper'), distinguishing it from sibling tools like renew_ens_name (which handles parent names) and mint_subnames (which creates new ones). The specificity of 'NameWrapper' and 'subname' provides exact technical scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit authorization rules ('Who can call this') detailing parent owner vs subname owner permissions with fuse requirements, and lists concrete use cases. However, it lacks explicit comparison to siblings (e.g., when to use this versus renew_ens_name for parent names), preventing a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_alpha

Scan the ENS marketplace for alpha — names listed below their comparable-sales valuation. Returns ranked opportunities with discount percentage, estimated value range, confidence rating, and comparable data. The autonomous agent's edge: find mispriced names before anyone else. Pair with get_valuation for deep analysis, then purchase_name to execute.
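The ranked-opportunities output can be post-filtered locally, for example to tighten the discount threshold without a second call. The listing dicts below are invented for illustration; the tool's real output field names may differ:

```python
def rank_opportunities(listings, min_discount_pct=20.0, limit=25):
    """Filter and re-rank find_alpha-style results locally.

    Listing shape ({"name", "price_eth", "discount_pct"}) is an
    assumption for illustration, not the server's schema.
    """
    hits = [l for l in listings if l["discount_pct"] >= min_discount_pct]
    hits.sort(key=lambda l: l["discount_pct"], reverse=True)
    return hits[:limit]

listings = [
    {"name": "coffee.eth", "price_eth": 0.5, "discount_pct": 35.0},
    {"name": "tea.eth", "price_eth": 1.2, "discount_pct": 10.0},
    {"name": "pixel.eth", "price_eth": 0.8, "discount_pct": 22.0},
]
```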

Parameters (JSON Schema)

  • limit (optional): Max results to return. Default 25, max 100.

  • charType (optional): Filter by character type

  • maxLength (optional): Maximum label length (e.g. 5 for up to 5-letter names)

  • minLength (optional): Minimum label length (e.g. 3 for 3-letter names and up)

  • maxPriceEth (optional): Maximum listing price in ETH (e.g. 1.0). Omit for no cap.

  • minConfidence (optional, default: LOW): Minimum confidence for comparable data. HIGH = 20+ sales, MEDIUM = 10+, LOW = 3+.

  • minDiscountPct (optional): Minimum discount vs estimated value. Default 20%. Range: 1-99.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses return data structure ('ranked opportunities with discount percentage, estimated value range...'), implies read-only behavior (contrasted with 'purchase_name to execute'), and notes the competitive aspect ('find mispriced names before anyone else'). Minor gap: no mention of rate limits, caching, or permission requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences with zero waste: sentence 1 defines purpose, sentence 2 details return values, sentence 3 provides workflow integration. Information is front-loaded and every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but the description compensates by listing specific return fields (discount percentage, value range, confidence, comparables). For a 7-parameter discovery tool with no annotations, it adequately covers functionality, though explicit mention of output format (array vs object) would elevate this to 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. The description references 'comparable-sales valuation' and 'discount percentage' which provide semantic context for the minDiscountPct and minConfidence parameters, but does not explicitly explain parameter syntax or relationships beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Scan[s] the ENS marketplace for alpha' (specific verb + resource) and clarifies the specific scope as 'names listed below their comparable-sales valuation.' It effectively distinguishes from siblings by noting this is for discovery vs. get_valuation (analysis) and purchase_name (execution).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'Pair with get_valuation for deep analysis, then purchase_name to execute.' This clearly delineates when to use this tool (opportunity discovery) versus the alternatives in the sibling list, establishing a complete use-case chain.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_reputation

Check if an ENS name or wallet is a registered AI agent. Returns ENSIP-25 agent-registration verification (text record bindings between ENS names and on-chain agent registries), AI metadata from text records, and ERC-8004 reputation data when available.

ENSIP-25 verification is live: reads agent-registration text records from the name's resolver to confirm the ENS name ↔ registry binding.

ERC-8004 reputation queries (scores, reviews, validations) use the live mainnet contracts deployed January 29, 2026.
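Since nameOrWallet accepts either form, a client can branch on input shape before the call. A sketch; the regex only checks the 0x/40-hex shape, not address checksums:

```python
import re

def classify_lookup_key(name_or_wallet: str) -> str:
    """Branch on the shape of the nameOrWallet input: a 0x-prefixed
    40-hex-char string is treated as a wallet, anything else as an
    ENS name. Shape check only, no checksum validation."""
    if re.fullmatch(r"0x[0-9a-fA-F]{40}", name_or_wallet):
        return "wallet"
    return "ens_name"
```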

Parameters (JSON Schema)

  • nameOrWallet (required): ENS name (e.g. "agent.eth") or wallet address (0x...) to look up

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and adequately discloses behavioral traits: it specifies the operation reads from the name's resolver and queries live mainnet contracts (deployed Jan 29, 2026). However, it does not explicitly classify the operation as read-only/safe or describe error handling when names are not found.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three distinct sentences, front-loaded with the primary purpose. Each sentence earns its place: the first defines the operation and return types, while the second and third provide essential technical context about the specific standards (ENSIP-25 and ERC-8004) and data sources.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the absence of an output schema and annotations, the description adequately compensates by detailing the specific return values (agent-registration verification, AI metadata, reputation data) and explaining the domain-specific standards involved (ENSIP-25, ERC-8004). This provides sufficient context for an agent to understand what data will be returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single 'nameOrWallet' parameter, establishing a baseline of 3. The description references 'ENS name or wallet' consistent with the schema but does not add additional semantic context, constraints, or format guidance beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the core action ('Check if... is a registered AI agent') and identifies the specific resources accessed (ENSIP-25 verification, ERC-8004 reputation data). It naturally distinguishes from siblings like 'search_agent_directory' (broad search vs. specific lookup), 'provision_agent_identity' (creation vs. verification), and 'get_name_details' (general ENS vs. AI-specific standards).

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool (to verify agent registration and reputation for a specific ENS name or wallet), but lacks explicit guidance on when to prefer this over 'search_agent_directory' or 'provision_agent_identity'. It does not state prerequisites or exclusions.

get_caller_identity

Returns the authenticated identity of the calling agent. If you connected with ERC-8128 signed requests, this resolves your wallet address to your ENS name, agent metadata, and portfolio summary. Call this first to confirm your identity is recognized.

Requires ERC-8128 authentication (signed HTTP requests). See GET /mcp/auth for setup details.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It effectively discloses authentication requirements ('ERC-8128 signed requests') and return behavior ('resolves your wallet address to your ENS name...'). Minor gap: doesn't explicitly state idempotency or rate limiting, though 'Returns' implies read-only safety.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, zero waste. Front-loaded with core purpose ('Returns...'), followed by conditional behavior details, ending with usage guideline and auth requirement. Every sentence adds unique value beyond the structured fields.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description must explain return values—it successfully does ('ENS name, agent metadata, and portfolio summary'). Covers critical auth context. Minor gap: doesn't describe error states (e.g., what happens if identity unrecognized).

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters with 100% schema coverage warrants baseline 4 per rubric. Description correctly focuses on auth requirements and return semantics rather than inventing parameter documentation where none exists.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Returns the authenticated identity of the calling agent' uses clear verb+resource. Distinguishes from siblings like get_name_details (arbitrary names) and get_wallet_portfolio (just portfolio) by focusing on the caller's full identity resolution including ENS name, metadata, and portfolio.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance: 'Call this first to confirm your identity is recognized' establishes clear invocation order. Prerequisites are clearly stated ('Requires ERC-8128 authentication') with reference to setup alternative ('See GET /mcp/auth'). Conditional usage is specified ('If you connected with ERC-8128 signed requests...').

get_market_activity

Get recent ENS marketplace activity — sales, new listings, offers, mints, transfers, renewals, and burns. Filter by event type. Returns event details including name, price (in ETH), buyer/seller addresses, and timestamp. Sorted by most recent first.
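
Since results are capped at 100 per call, a client can page through the event stream with limit and offset. A sketch assuming a generic call_tool(name, args) helper that returns the event list — the helper and the return shape are assumptions, since no output schema is published:

```python
def fetch_market_activity(call_tool, event_types=None, page_size=100):
    """Page through get_market_activity until a short page signals the end."""
    events, offset = [], 0
    while True:
        args = {"limit": page_size, "offset": offset}
        if event_types:
            args["eventTypes"] = event_types
        batch = call_tool("get_market_activity", args)
        events.extend(batch)
        if len(batch) < page_size:  # last page reached
            return events
        offset += page_size
```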

Parameters (JSON Schema)

  • limit (optional): Max results (default 25, max 100)

  • offset (optional): Pagination offset

  • eventTypes (optional): Filter by event type(s). Defaults to all types.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It successfully explains sorting behavior ('Sorted by most recent first') and compensates for the missing output schema by detailing return fields (name, price in ETH, addresses, timestamp). It does not disclose rate limits or error behaviors, preventing a perfect score.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four efficient sentences with zero waste. The description is front-loaded with the core action, followed by filter capability, return payload details, and sorting behavior. Every clause conveys unique information beyond what structured fields provide.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 3-parameter schema and lack of output schema or annotations, the description adequately covers the critical gaps by explaining the return structure and sort order. It appropriately omits parameter syntax details (covered by schema) but could marginally improve by noting all parameters are optional.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100% (baseline 3), the description adds value by mapping the abstract 'eventTypes' enum to concrete business concepts (sales, new listings, renewals, burns). It also implies the temporal nature of the query ('recent'), providing semantic context for the limit/offset pagination parameters not explicitly detailed in the schema descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Get') and clearly identifies the resource ('ENS marketplace activity'). It enumerates specific event types (sales, listings, mints, etc.) that clearly distinguish this tool from siblings like get_name_details or get_wallet_portfolio, which retrieve static data rather than transactional event streams.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the enumeration of marketplace events (sales, offers, burns), suggesting it's for monitoring market trends. However, it lacks explicit guidance on when to use this versus alternatives like get_name_details (for static lookups) or search_ens_names, and omits prerequisites or filtering recommendations beyond mentioning the capability exists.

get_name_details

Get full details for a specific ENS name including owner, expiry, tags, active listings (with prices in ETH and USD), active offers, ENSIP-25 agent identity bindings (if any agent-registration text records are set), and ENSIP-26 agent context and endpoint discovery records.

Parameters (JSON Schema)

  • name (required): ENS name or label (e.g. "vitalik" or "vitalik.eth")

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and comprehensively lists return values including owner, expiry, tags, active listings with dual-currency pricing, offers, and ENSIP-25/26 agent records. However, it omits explicit mention of read-only safety, authentication requirements, or rate limiting behavior.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single dense sentence that efficiently substitutes for a missing output schema by listing seven distinct data categories. While lengthy, every clause earns its place by specifying unique return data; however, breaking into two sentences might improve scannability.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and the tool's complexity (spanning ownership, market data, and agent identity standards), the description provides exceptional completeness by detailing the exact structure and content of the returned data, including specific ENSIP standard implementations.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with clear examples ('vitalik' or 'vitalik.eth'), so the description appropriately does not redundantly explain the parameter. It meets the baseline expectation where the schema already provides complete semantic coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('full details for a specific ENS name') and clearly distinguishes from siblings like get_primary_name (address-to-name), check_availability (binary status), and get_valuation (price-only) by enumerating comprehensive data categories including ownership, expiry, market listings, and agent identity standards.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the specificity of returned fields (e.g., indicating this is for comprehensive lookups rather than simple availability checks), but provides no explicit guidance on when to use this versus search_ens_names, get_valuation, or get_primary_name, nor does it state prerequisites or exclusion criteria.

get_primary_name

Check what primary ENS name is set for a wallet address (reverse resolution).

Returns the ENS name that this address resolves to, or null if no primary name is set.

This verifies both directions:

  • Reverse: address → name (the reverse record)

  • Forward: name → address (confirms the name actually points back to this wallet)

If either direction is missing, the primary name won't resolve. Use this to:

  • Verify a primary name was set correctly after set_primary_name

  • Check if a wallet has any primary name configured

  • Debug why a primary name isn't showing up (missing ETH address record)
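
The two-direction check described above can be sketched as pure logic; the lookup callables stand in for actual resolver queries and are placeholders, not part of any real API:

```python
def primary_name(address, reverse_lookup, forward_lookup):
    """Return the primary ENS name only if both directions agree.

    reverse_lookup: address -> name or None (the reverse record)
    forward_lookup: name -> address or None (the ETH address record)
    """
    name = reverse_lookup(address)
    if name is None:
        return None  # no reverse record set
    if forward_lookup(name) != address:
        return None  # forward record missing or pointing elsewhere
    return name
```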

Parameters (JSON Schema)

  • walletAddress (required): Ethereum wallet address (0x...) to check reverse resolution for

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and excels by explaining the bidirectional validation logic ('verifies both directions'), the exact return values ('ENS name... or null'), and failure conditions ('If either direction is missing, the primary name won't resolve'). This critical ENS-specific behavior is not inferable from the schema alone.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence. While lengthy, the detailed explanation of bidirectional resolution is essential context for ENS operations. The bullet points for use cases are structured and scannable, though the text could be slightly tightened without losing meaning.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description explicitly states what gets returned (the ENS name or null) and explains the validation logic required for successful resolution. For a single-parameter lookup tool, it provides comprehensive context covering success cases, failure modes, and debugging scenarios.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the walletAddress parameter already documented as 'Ethereum wallet address (0x...) to check reverse resolution for'. The description references 'wallet address' in the opening sentence but does not add significant semantic detail beyond what the schema provides, meeting the baseline for high-coverage schemas.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Check') and clearly identifies the resource ('primary ENS name') and operation ('reverse resolution'). It distinguishes itself from sibling tools like set_primary_name (the write counterpart) and search_ens_names (general lookup) by specifying this is specifically for reverse address-to-name resolution.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Use this to:' followed by three concrete scenarios including post-operation verification ('after set_primary_name'), existence checks, and debugging. This creates clear workflow guidance relative to the sibling set_primary_name tool and indicates when the tool is appropriate versus when troubleshooting is needed.

get_similar_names

Find ENS names semantically similar to a given name using vector embeddings across 3.6M+ names. Returns similar names with similarity scores and live marketplace data (price, owner, expiry). Great for discovering related names for portfolio building or brand exploration.
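
The server does not document its distance metric, but similarity scores over vector embeddings are conventionally cosine similarity, sketched here for intuition only:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors:
    1.0 means identical direction, 0.0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```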

Parameters (JSON Schema)

  • name (required): ENS name or label to find similar names for (e.g. "coffee", "pixel.eth")

  • limit (optional): Max results (default 20, max 50)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the algorithmic mechanism ('vector embeddings'), the corpus size ('3.6M+ names'), and crucially discloses the return values ('similarity scores and live marketplace data') since no output schema exists. It lacks rate limits or error-handling details, preventing a perfect score.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three highly efficient sentences: the first defines the core operation, the second details the return payload, and the third specifies use cases. It is appropriately front-loaded with the action and contains no redundant language.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description adequately compensates by explaining the semantic search mechanism and return data structure (marketplace data, scores). For a two-parameter tool with complete schema coverage, the description provides sufficient context for invocation, though it could be improved by mentioning edge cases (e.g., no matches found).

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (both 'name' and 'limit' are documented). The description references a 'given name' which aligns with the required parameter, but does not add semantic meaning, format constraints, or examples beyond what the schema already provides. With high schema coverage, the baseline score of 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Find[s] ENS names semantically similar to a given name using vector embeddings across 3.6M+ names,' providing a specific verb, resource, mechanism, and scope. The mention of 'semantically similar' and 'vector embeddings' effectively distinguishes this from sibling tools like search_ens_names (likely keyword-based) or check_availability (exact match).

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context stating it is 'Great for discovering related names for portfolio building or brand exploration,' indicating when to use the tool. However, it lacks explicit guidance on when NOT to use it (e.g., when exact matching is needed) or explicit naming of alternative tools like search_ens_names for different use cases.

get_usage_stats

Get usage statistics for this MCP server session. Returns tool call counts, success rates, and average latency.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses return payload contents ('tool call counts, success rates, and average latency'), which substitutes for the missing output schema. Safety profile implied by 'Get' but not explicitly stated.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. First states the action, second clarifies return values. Appropriately front-loaded.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter observability tool, the description is complete. It compensates for the missing output schema by detailing the returned statistics.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters, which per instructions establishes a baseline of 4. Description correctly implies no filtering parameters are needed.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with resource 'usage statistics' and scope 'for this MCP server session', clearly distinguishing it from the sibling ENS management tools (register, transfer, renew, etc.).

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context ('for this MCP server session') indicating this is for monitoring/debugging the current interaction, though it lacks explicit exclusions or named alternatives.

get_valuation

Get a confidence-rated valuation for an ENS name based on comparable sales, entity recognition (Wikipedia/Wikidata), search interest, word frequency, and fame-scaled pricing. Returns estimated value range, background context on the name (person/place/brand/concept), comparable sales data, and a narrative explaining the valuation methodology. Essential for pricing decisions.

Parameters (JSON Schema)

  • name (required): ENS name or label to value (e.g. "coffee" or "coffee.eth")

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and successfully discloses data sources (Wikipedia/Wikidata, search interest, comparable sales), return structure (value range, context, narrative), and methodology. Could explicitly state read-only nature, though 'Get' implies this.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three compact components: methodology explanation, return value specification, and usage context. Critical information (the valuation basis) is front-loaded in the first sentence, with no redundant words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description must explain return values—it comprehensively lists estimated range, background context, comparable sales, and narrative. Methodology transparency is strong. Minor gap: no mention of rate limits or confidence level interpretation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage ('ENS name or label to value'), establishing baseline 3. The description references 'ENS name' but does not add syntax details, format constraints, or examples beyond what the schema already provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a confidence-rated valuation'), resource ('ENS name'), and distinguishes from siblings by focusing on pricing analysis rather than registration, transfer, or management operations. The methodology details (comparable sales, entity recognition, etc.) further clarify scope.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear positive guidance ('Essential for pricing decisions') establishing when to use the tool. Lacks explicit negative guidance ('when not to use') or alternatives, but the tool's unique analytical function among operational siblings makes its purpose self-evident.

get_wallet_portfolio

Get all ENS names owned by a wallet address. Returns each name with label, tags, expiry, registration date, and active listing/offer prices. Useful for portfolio analysis and wallet profiling.

If you are authenticated via ERC-8128 and omit the wallet parameter, your own wallet is used automatically.
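
The wallet fallback described above can be mirrored client-side. A sketch — the helper name and argument-building shape are assumptions for illustration:

```python
def get_wallet_portfolio_args(wallet=None, authenticated_wallet=None,
                              limit=100, offset=0):
    """Build arguments for get_wallet_portfolio.

    Per the description, wallet may be omitted when the caller is
    authenticated via ERC-8128; the server then uses the caller's own
    wallet. This helper applies the same fallback before calling.
    """
    resolved = wallet or authenticated_wallet
    if resolved is None:
        raise ValueError("wallet is required when not authenticated")
    return {"wallet": resolved, "limit": limit, "offset": offset}
```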

Parameters (JSON Schema)

  • limit (optional): Max results (default 100, max 200)

  • offset (optional): Pagination offset

  • wallet (required): Ethereum wallet address (0x...) or ENS name

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Compensates well by detailing return structure (label, tags, expiry, prices) and authentication fallback logic. Missing explicit safety/read-only declaration or rate limit disclosure.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences across two paragraphs with zero waste. Front-loaded with core action, followed by return values, use case, and authentication behavior. Every sentence earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description partially compensates by listing return fields. Covers authentication complexity adequately. Minor gap: lacks explicit statement that this is a safe read operation, which would be helpful given the many mutation siblings.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds critical semantic detail that wallet can be omitted under ERC-8128 authentication, overriding the schema's 'required' constraint. Does not add syntax details beyond schema for limit/offset.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb ('Get') + resource ('ENS names') + scope ('owned by a wallet address'), and distinguishes from siblings like get_name_details (single name) or search_ens_names (search) by emphasizing portfolio-level retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context ('Useful for portfolio analysis and wallet profiling') and explains authentication-based parameter omission behavior. Lacks explicit when-not-to-use guidance or named sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

make_offer

Submit an offer (bid) on a registered ENS name. Validates the offer, provides market context (listing price, comparable sales, existing offers), and directs you to execute on the Vision marketplace.

Requires wallet signature for on-chain execution via Seaport protocol. The name owner can accept, counter, or decline on ensvision.com.

Tip: Use get_valuation first to understand fair market value before making an offer.

Parameters (JSON Schema):

  • name (required): ENS name to make an offer on (e.g. "coffee.eth")

  • currency: Payment currency (default: WETH)

  • amountEth (required): Offer amount in ETH

  • expiryHours: Offer expiry in hours (default: 72)

  • walletAddress (required): Your wallet address (offer maker)
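As a hedged sketch of how a caller might assemble arguments for this tool: parameter names mirror the schema above, while the `expiresAt` field and the positive-amount check are illustrative assumptions, not part of the documented API.

```python
import time

def build_offer_payload(name, amount_eth, wallet, currency="WETH", expiry_hours=72):
    """Assemble arguments for a make_offer call, applying the documented defaults."""
    if amount_eth <= 0:
        # Illustrative client-side guard; the tool performs its own validation.
        raise ValueError("offer amount must be positive")
    return {
        "name": name,
        "amountEth": amount_eth,
        "currency": currency,          # WETH by default, per the schema
        "expiryHours": expiry_hours,   # 72 hours by default, per the schema
        "walletAddress": wallet,
        # Hypothetical derived field for local bookkeeping, not a schema parameter.
        "expiresAt": int(time.time()) + expiry_hours * 3600,
    }
```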
Behavior 4/5

With no annotations provided, the description carries full disclosure burden. It effectively explains validation occurs, market context is returned (comparable sales, existing offers), execution requires Seaport protocol signature, and the offer lifecycle ends with owner action. Lacks only cancellation policy or gas cost disclosure.

Conciseness 5/5

Efficiently structured with action (sentence 1), execution requirements (sentence 2), outcome (sentence 3), and prerequisite tip (sentence 4). Zero redundancy; every sentence conveys distinct operational information. Front-loaded with the core purpose.

Completeness 4/5

For a blockchain transaction tool with 5 parameters and no output schema, the description adequately covers the workflow, marketplace context, prerequisite tooling, and post-submission owner actions. Minor gap in not describing the return value or error states, but comprehensive given the constraints.

Parameters 3/5

Schema coverage is 100%, so the schema fully documents all 5 parameters (name, amountEth, currency, expiryHours, walletAddress). The description references 'offer' and 'wallet signature' which loosely map to amountEth and walletAddress, but adds no syntax, format, or semantic details beyond the schema definitions. Baseline 3 appropriate for high schema coverage.

Purpose 5/5

Opens with specific verb ('Submit') and resource ('offer/bid on a registered ENS name'), clearly distinguishing from direct purchases (purchase_name) and new registrations. Explicitly scopes to 'registered' names and 'Vision marketplace', differentiating it from sibling tools.

Usage Guidelines 5/5

Provides explicit prerequisite recommendation ('Use get_valuation first'), states requirement for wallet signature, and outlines the owner response workflow (accept/counter/decline). Clearly signals when to use this tool versus valuation or direct purchase alternatives.

manage_ens_name

Get comprehensive management info for an ENS name.

Returns:

  • Registration status (active, expiring soon, grace period, premium auction, expired, available)

  • Exact expiry date and days remaining

  • Whether the name is wrapped (NameWrapper) or unwrapped (BaseRegistrar)

  • Current owner address

  • Renewal pricing (on-chain)

  • Recommended actions based on current status

Use this before performing any management action (renew, transfer, set records) to understand the current state of a name.

Parameters (JSON Schema):

  • name (required): ENS name to check (e.g. "coffee.eth")

  • includeRenewalPricing: Include on-chain renewal pricing (default: true)
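The status buckets listed above can be illustrated with a small classifier. This is a sketch under assumed thresholds: a 30-day "expiring soon" window, the conventional 90-day ENS grace period, and a 21-day premium auction, none of which the tool description itself specifies.

```python
# Assumed thresholds, not documented by the tool: verify against live tool output.
EXPIRING_SOON_DAYS = 30
GRACE_DAYS = 90
PREMIUM_DAYS = 21

def classify_status(days_until_expiry):
    """Map days remaining (negative = past expiry) to a status bucket."""
    if days_until_expiry > EXPIRING_SOON_DAYS:
        return "active"
    if days_until_expiry > 0:
        return "expiring soon"
    days_past = -days_until_expiry
    if days_past <= GRACE_DAYS:
        return "grace period"          # owner can still renew
    if days_past <= GRACE_DAYS + PREMIUM_DAYS:
        return "premium auction"       # anyone can register at a decaying premium
    return "available"                 # open for normal registration
```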
Behavior 3/5

No annotations provided, so description carries full burden. Documents return values comprehensively (status types, wrapped state, recommendations). However, fails to disclose critical blockchain-specific behaviors: whether this is a read-only call (no gas), if it requires authentication, or rate limiting, which agents need to know for safe invocation.

Conciseness 5/5

Well-structured with clear information hierarchy: one-line purpose, detailed Returns bullet list, and usage guideline sentence. Every sentence earns its place with zero redundancy. Front-loaded with the core action statement.

Completeness 4/5

Given no output schema exists, the description compensates effectively by enumerating all return fields (status, expiry, ownership, recommendations). Missing only the safety/transactional context that annotations would typically provide (read-only hint), but otherwise complete for an information-retrieval tool.

Parameters 3/5

Schema description coverage is 100% ('name' and 'includeRenewalPricing' are both fully documented in the schema). The description prose does not add parameter syntax details, format constraints, or examples beyond what the schema provides, meeting the baseline for high-coverage schemas.

Purpose 4/5

States specific action ('Get') and resource ('ENS name') with clear scope ('comprehensive management info'). Effectively distinguishes from siblings like 'check_availability' or 'get_name_details' by framing around management workflows, though it doesn't explicitly name the sibling to contrast against.

Usage Guidelines 4/5

Explicitly states when to use: 'Use this before performing any management action (renew, transfer, set records) to understand the current state.' Provides concrete examples of management actions. Lacks explicit 'when-not-to-use' or named alternatives for non-management lookups.

manage_fuses

Manage fuses on a wrapped ENS name. Fuses are permission bits that can be permanently burned to restrict what can be done with a name.

Three modes:

  1. read — Check which fuses are currently burned on a name

  2. burn_owner_fuses — Burn fuses on a name you own (CANNOT_UNWRAP must be burned first)

  3. burn_child_fuses — As a parent, burn fuses on a subname (e.g. burn PARENT_CANNOT_CONTROL on sub.parent.eth)

Owner-controlled fuses:

  • CANNOT_UNWRAP — prevents unwrapping (MUST be burned first before any other fuse)

  • CANNOT_BURN_FUSES — prevents burning additional fuses

  • CANNOT_TRANSFER — prevents transfers

  • CANNOT_SET_RESOLVER — prevents resolver changes

  • CANNOT_SET_TTL — prevents TTL changes

  • CANNOT_CREATE_SUBDOMAIN — prevents creating new subnames

  • CANNOT_APPROVE — prevents approving operators

Parent-controlled fuses (for subnames):

  • PARENT_CANNOT_CONTROL — parent permanently gives up control over the subname

  • CAN_EXTEND_EXPIRY — allows the subname owner to extend their own expiry

WARNING: All fuse burning is IRREVERSIBLE. Fuses expire when the name expires.

Parameters (JSON Schema):

  • name (required): ENS name to manage fuses on (e.g. "coffee.eth" for owner fuses, or "sub.coffee.eth" for child fuses)

  • fuses: Fuses to burn. Required for burn actions, optional for read. Owner fuses: CANNOT_UNWRAP, CANNOT_BURN_FUSES, CANNOT_TRANSFER, CANNOT_SET_RESOLVER, CANNOT_SET_TTL, CANNOT_CREATE_SUBDOMAIN, CANNOT_APPROVE. Parent-controlled fuses (for subnames): PARENT_CANNOT_CONTROL, CAN_EXTEND_EXPIRY.

  • action: "burn_owner_fuses" burns fuses on a name you own, "burn_child_fuses" burns fuses on a subname you are parent of, "read" reads current fuses (default: read)

  • expiry: For burn_child_fuses only: Unix timestamp for subname expiry (cannot exceed parent expiry)
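The fuse names above map to bits in a single on-chain fuse word. A minimal sketch follows, assuming the bit values used by the ENS NameWrapper contract; verify against the deployed contract before burning anything, since burns are irreversible.

```python
# Assumed NameWrapper fuse bit values; confirm on-chain before use.
FUSES = {
    "CANNOT_UNWRAP": 1,
    "CANNOT_BURN_FUSES": 2,
    "CANNOT_TRANSFER": 4,
    "CANNOT_SET_RESOLVER": 8,
    "CANNOT_SET_TTL": 16,
    "CANNOT_CREATE_SUBDOMAIN": 32,
    "CANNOT_APPROVE": 64,
    "PARENT_CANNOT_CONTROL": 1 << 16,
    "CAN_EXTEND_EXPIRY": 1 << 18,
}
OWNER_BITS = 0x7F  # the seven owner-controlled fuses

def encode_fuses(names, current=0):
    """OR the named fuses into a bitmask, enforcing the CANNOT_UNWRAP prerequisite."""
    mask = 0
    for n in names:
        mask |= FUSES[n]
    burning_other_owner_fuses = (mask & OWNER_BITS) & ~FUSES["CANNOT_UNWRAP"]
    unwrap_burned = (current | mask) & FUSES["CANNOT_UNWRAP"]
    if burning_other_owner_fuses and not unwrap_burned:
        raise ValueError("CANNOT_UNWRAP must be burned before other owner fuses")
    return mask

def decode_fuses(word):
    """List the named fuses set in an on-chain fuse word."""
    return [n for n, bit in FUSES.items() if word & bit]
```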
Behavior 5/5

With no annotations provided, the description carries full disclosure burden and excels by revealing: (1) irreversibility of all fuse burning, (2) fuses expire when the name expires, (3) prerequisite dependencies (CANNOT_UNWRAP first), and (4) expiry constraints for child fuses (cannot exceed parent expiry). It accurately portrays the destructive, permanent nature of the operations.

Conciseness 4/5

The description is well-structured with clear visual hierarchy (bold headers, bullet lists) and front-loaded with the core concept. While lengthy, the complexity of ENS fuses and irreversible consequences justify the detail. Each section serves a distinct purpose: conceptual introduction, operational modes, valid fuse values, and critical warnings.

Completeness 5/5

Given the high complexity of irreversible blockchain permissions and lack of annotations, the description provides comprehensive coverage: it explains the domain concept (fuses as permission bits), enumerates all valid fuse values with semantics, specifies safety constraints, and includes prominent warnings about irreversibility. No output schema exists, but per guidelines, the description need not explain return values.

Parameters 4/5

While the schema has 100% coverage providing baseline documentation, the description adds crucial semantic context beyond the schema: detailed explanations of what each fuse constant does (e.g., 'prevents unwrapping', 'prevents transfers'), the hierarchical relationship between fuses (CANNOT_UNWRAP as prerequisite), and domain-specific constraints that help the agent understand valid parameter combinations.

Purpose 5/5

The description clearly defines the tool's purpose using specific verbs (manage/burn/read) and resources (fuses on wrapped ENS names). It effectively distinguishes this from sibling tools like manage_ens_name or wrap_name by focusing specifically on 'fuses' as permission bits, and details the three distinct operational modes (read, burn_owner_fuses, burn_child_fuses).

Usage Guidelines 5/5

Provides explicit guidance on when to use each mode (read vs. burn operations) and critical prerequisites ('CANNOT_UNWRAP must be burned first before any other fuse'). It clearly differentiates between owner-controlled fuses (for names you own) and parent-controlled fuses (for subnames you parent), establishing clear contextual boundaries for invocation.

mint_subnames

Bulk create subnames under a parent ENS name. Designed for agent fleet deployment — create identities like agent001.company.eth, agent002.company.eth, etc. Each subname can have its own owner and records (addresses, text records). Uses the ENS NameWrapper for subname creation.

Returns complete transaction recipes (contract address, encoded calldata, gas estimates) for each subname. Your wallet signs and broadcasts the transactions. Subnames are free to create — only gas costs apply.

Parameters (JSON Schema):

  • subnames (required): Array of subnames to create

  • parentName (required): Parent ENS name (e.g. "company.eth")

  • walletAddress (required): Wallet address that owns the parent name
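The agent001/agent002 fleet-naming pattern from the description can be sketched as a label generator. This is a simplification: real subname labels must also pass ENS normalization, which this sketch does not attempt.

```python
def fleet_subnames(parent, prefix="agent", count=3, width=3):
    """Generate zero-padded subname labels under a parent name (e.g. agent001.company.eth)."""
    if not parent.endswith(".eth"):
        # Simplified guard; the tool itself validates the parent name properly.
        raise ValueError("parent must be a .eth name")
    return [f"{prefix}{i:0{width}d}.{parent}" for i in range(1, count + 1)]
```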
Behavior 5/5

With no annotations provided, the description carries full disclosure burden and excels: it specifies the return format ('complete transaction recipes'), execution model (wallet signing required), cost structure ('free to create — only gas costs'), and technical implementation ('Uses the ENS NameWrapper').

Conciseness 5/5

Two well-structured paragraphs with zero redundancy: first covers purpose and capabilities, second covers return values and costs. Every sentence conveys unique operational information.

Completeness 5/5

Given the absence of output schema and annotations, the description fully compensates by detailing the transaction recipe return format, gas considerations, and NameWrapper mechanics necessary for successful invocation.

Parameters 4/5

While schema coverage is 100% (baseline 3), the description adds significant semantic value by illustrating the parent-subname relationship with concrete examples ('agent001.company.eth') and clarifying ownership inheritance ('defaults to parent owner' concept).

Purpose 5/5

The description clearly states the core action ('Bulk create subnames') and resource ('parent ENS name'), with specific examples ('agent001.company.eth') that distinguish it from sibling tools like bulk_register. The scope is precisely defined as subname creation under an existing parent.

Usage Guidelines 4/5

Provides clear contextual guidance through 'Designed for agent fleet deployment' and explains the transaction flow ('Your wallet signs and broadcasts'). However, it does not explicitly name alternatives (e.g., bulk_register for new domains vs subnames) or exclusion criteria.

provision_agent_identity

Provision a complete ENS identity for an AI agent in one call. Describe your agent's purpose and budget — get back recommended available names, a multi-step registration recipe (commit → register → set agent records), and ready-to-sign transaction data.

The registration recipe includes agent identity records (ENSIP-25: ai.agent, ai.purpose, ai.capabilities + ENSIP-26: agent-context) as a mandatory final step — these records make the agent discoverable in the ENS Agents directory. ENSIP-25 agent-registration binding is included when an agentRegistry is provided.

IMPORTANT: The calling AI MUST execute all steps in the recipe, including the final set_ens_records step. Agent identity is not complete until the text records are on-chain.

Example: "DeFi trading bot, 0.05 ETH budget" → names like swiftswap.eth with full registration + identity setup instructions.

Parameters (JSON Schema):

  • purpose (required): What the agent does (e.g. "DeFi trading bot", "data analytics agent", "social media manager", "infrastructure monitor")

  • budgetEth: Maximum budget in ETH for name acquisition (default: 0.1)

  • preferences: Optional naming preferences

  • agentRegistry: ENSIP-25 agent registry binding. If provided, the provisioning flow will include the agent-registration text record that links this ENS name to your on-chain agent identity.

  • walletAddress (required): Agent wallet address (will own the name and sign transactions)
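A sketch of assembling the agent identity records named above: the record keys come from the description (ENSIP-25 ai.agent, ai.purpose, ai.capabilities; ENSIP-26 agent-context), but the value formats are assumptions of this sketch, not a documented schema.

```python
def agent_records(purpose, capabilities, context_url=None, agent_registry=None):
    """Build the ENSIP-25/26 text records the registration recipe sets as its final step."""
    records = {
        "ai.agent": "true",                      # assumed flag format
        "ai.purpose": purpose,
        "ai.capabilities": ",".join(capabilities),  # assumed comma-separated encoding
    }
    if context_url:
        records["agent-context"] = context_url   # ENSIP-26
    if agent_registry:
        # ENSIP-25 binding, included only when an agentRegistry is provided.
        records["agent-registration"] = agent_registry
    return records
```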
Behavior 4/5

With no annotations provided, the description carries the full burden and discloses key behavioral traits: it returns 'ready-to-sign transaction data' (not executing transactions itself), mandates the final `set_ens_records` step for completeness, and details the multi-step process flow (commit → register → set records). Minor gap: it doesn't clarify whether name recommendations are reserved/held or just checked for availability at call time.

Conciseness 5/5

The description is well-structured with front-loaded value proposition ('in one call'), followed by technical specifics (ENSIP standards), critical warnings (MUST execute all steps), and a concrete example. Every sentence serves a distinct purpose with no redundancy.

Completeness 5/5

Given the tool's complexity (nested objects, multi-step ENS workflow) and lack of output schema, the description comprehensively explains the return values (recommended names, recipe, transaction data) and completion criteria. It adequately compensates for missing annotations by detailing the stateful nature of the provisioning process.

Parameters 4/5

Schema coverage is 100%, establishing a baseline of 3. The description adds value through a concrete example ('DeFi trading bot, 0.05 ETH budget') demonstrating `purpose` and `budgetEth` usage, and clarifies the conditional behavior of `agentRegistry` regarding ENSIP-25 binding, adding semantic context beyond the schema's structural description.

Purpose 5/5

The description clearly states the tool 'Provision[s] a complete ENS identity for an AI agent' — specific verb (provision), resource (ENS identity), and scope (AI agent-specific). It effectively distinguishes from siblings like `set_ens_records` or `bulk_register` by emphasizing it returns a 'multi-step registration recipe' rather than executing a single step.

Usage Guidelines 4/5

The description provides strong imperative guidance ('The calling AI MUST execute all steps') and clarifies when ENSIP-25 binding is included (when `agentRegistry` provided). However, it could more explicitly distinguish when to use this high-level orchestrator versus individual sibling tools like `check_availability` or `manage_ens_name`.

purchase_name

Purchase an ENS name — either buy a listed name from a marketplace or register an available name directly on-chain.

For AVAILABLE names: Returns a complete registration recipe with contract address, ABI, step-by-step instructions, and a pre-generated secret. Your wallet signs and submits the transactions (commit → wait 60s → register).

For LISTED names: Searches all marketplaces (OpenSea, Grails) for the best price. If there are MULTIPLE active listings, returns CHOOSE_LISTING status with all options — present these to the user and ask which one they want. When the user chooses, call this tool again with the chosen orderHash to get the buy transaction.

The tool auto-detects whether the name is available or listed. You can override with the 'action' parameter.

Parameters (JSON Schema):

  • name (required): ENS name to purchase (e.g. "coffee.eth")

  • action: buy_listing (purchase listed name), register (register available name), auto (detect automatically) (default: auto)

  • orderHash: Specific Seaport order hash to fulfill. Use this when the user has chosen a specific listing from multiple options.

  • maxPriceEth: Maximum price willing to pay in ETH (for listed names)

  • durationYears: Registration duration in years (default: 1)

  • walletAddress (required): Buyer wallet address (will own the name)
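The branching the description asks the calling agent to handle can be sketched as a small dispatcher. Only the CHOOSE_LISTING status is documented above; the other status strings and response fields here are hypothetical placeholders for whatever the tool actually returns.

```python
def next_step(response):
    """Decide what the calling agent should do with a purchase_name response."""
    status = response.get("status")
    if status == "CHOOSE_LISTING":
        # Multiple active listings: present them to the user, then call
        # purchase_name again with the chosen Seaport orderHash.
        return ("ask_user", response.get("listings", []))
    if status == "REGISTRATION_RECIPE":          # hypothetical status name
        # Available name: wallet signs and submits commit -> wait 60s -> register.
        return ("sign_and_submit", response.get("steps", []))
    if status == "BUY_TRANSACTION":              # hypothetical status name
        return ("sign_and_submit", [response.get("transaction")])
    return ("inspect", response)
```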
Behavior 5/5

With no annotations provided, the description carries full behavioral disclosure burden and succeeds comprehensively. It discloses the multi-step transaction flow (commit → wait → register), specific return structures (registration recipe with ABI and pre-generated secret), marketplace search behavior (OpenSea/Grails aggregation), price comparison logic, and the conditional CHOOSE_LISTING state requiring iterative interaction.

Conciseness 5/5

Three dense paragraphs with zero redundancy: first establishes scope and modes, second details the registration flow with technical specifics (60s wait, secret generation), third covers marketplace purchasing with decision branching. Information is front-loaded and logically sequenced; every sentence conveys critical behavioral or procedural information.

Completeness 5/5

For a complex tool with 6 parameters, multiple execution branches, stateful interaction patterns (CHOOSE_LISTING), and no output schema, the description is remarkably complete. It explains return values (registration recipe vs. listing options), error/edge case handling (multiple listings), timing expectations, and the exact protocol for resuming the operation with orderHash.

Parameters 4/5

Schema coverage is 100% so baseline is 3. The description adds meaningful workflow context beyond the schema: it explains the 'action' parameter's role in overriding auto-detection, integrates 'orderHash' into the two-step purchase flow for multiple listings, and clarifies that walletAddress establishes ownership. This narrative integration of parameters into the usage workflow adds significant value.

Purpose 5/5

The description opens with a specific verb ('Purchase') and resource ('ENS name'), clearly distinguishing two distinct modes: buying listed names from marketplaces versus registering available names on-chain. It effectively differentiates from siblings like check_availability (read-only), renew_ens_name (lifecycle extension), and make_offer (negotiation rather than immediate purchase).

Usage Guidelines 5/5

Provides explicit workflow guidance for both operational paths: for available names it details the commit-wait-register sequence with 60s timing, and for listed names it explains the CHOOSE_LISTING status handling requiring user selection followed by a second tool invocation with orderHash. It also clarifies the auto-detection default and the manual override capability using the 'action' parameter.

reclaim_name

Reclaim ENS Registry ownership of a .eth name.

This syncs the ENS Registry owner to match the BaseRegistrar token owner. Used when:

  • A name was transferred via direct safeTransferFrom (bypassed ENS routing)

  • ENS Registry ownership is out of sync with token ownership

  • Recovery after a contract migration or edge case

The caller must own the BaseRegistrar ERC-721 token for the name. After reclaiming, you may also need to set the resolver if it was cleared.

Parameters (JSON Schema):

  • name (required): ENS name to reclaim (e.g. "coffee.eth")

  • owner (required): Address to set as the ENS Registry owner (must own the BaseRegistrar token)
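The out-of-sync condition this tool repairs reduces to a comparison between two owners, which can be sketched as follows. Both addresses are assumed to be fetched elsewhere (e.g. from the BaseRegistrar's ownerOf and the Registry's owner lookup); the case-insensitive comparison is a simplification that ignores EIP-55 checksums.

```python
def needs_reclaim(token_owner, registry_owner):
    """True when ENS Registry ownership should be re-synced to the BaseRegistrar token owner."""
    # Hex addresses compared case-insensitively; checksum validation omitted.
    return token_owner.lower() != registry_owner.lower()
```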
Behavior 4/5

With no annotations provided, the description carries the full burden of explaining behavioral traits. It successfully discloses the authentication requirement (ERC-721 token ownership), the mechanism (syncing registry to token owner), and side effects (potential resolver clearance). However, it omits standard blockchain operational details like gas costs or transaction irreversibility.

Conciseness 5/5

The description follows an effective inverted pyramid structure, opening with the core purpose, followed by mechanism explanation, a scannable bulleted list of specific use cases, and concluding with prerequisites. Every sentence contributes unique value, with zero redundancy or filler content.

Completeness 5/5

Given the complexity of ENS's two-layer ownership model and the absence of annotations or output schema, the description adequately covers the business logic, edge cases, prerequisites, and post-operation considerations. It explains the relationship between the Registry and BaseRegistrar sufficiently for an agent to understand when reclamation is necessary.

Parameters 3/5

The input schema has 100% description coverage, documenting both the name format and the owner address pattern with constraints. The description reinforces the ownership constraint mentioned in the schema but does not add additional semantic meaning, syntax examples, or format details beyond what the schema already provides, meeting the baseline for high-coverage schemas.

Purpose 5/5

The description opens with a precise action ('Reclaim ENS Registry ownership') targeting a specific resource ('.eth name'). It clearly distinguishes this tool from siblings like `transfer_ens_name` by explaining the specific mechanics of syncing the ENS Registry owner to the BaseRegistrar token owner, clarifying this is a recovery/sync operation rather than a standard transfer.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit usage scenarios under 'Used when:' listing three specific edge cases (direct safeTransferFrom bypass, out of sync ownership, contract migration). It also states the prerequisite that 'The caller must own the BaseRegistrar ERC-721 token' and notes the follow-up action of setting the resolver, providing clear boundaries for when the tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

renew_ens_name

Renew an ENS name or batch of names. Returns the transaction data needed to extend registration.

Unlike registration, renewal is simple — just one transaction with payment. No commit/reveal needed.

Accepts ANY duration — days, weeks, months, years. There is no minimum renewal period on ENS. Examples: 7 days (1 week), 28 days (1 month), 365 days (1 year).

Anyone can renew any name (you don't need to be the owner). This is useful for:

  • Extending your own names before expiry

  • Gifting renewal to a friend's name

  • Protecting valuable names from expiring

Returns exact on-chain pricing from the ETHRegistrarController with a 5% buffer (excess is refunded).

For batch renewals (multiple names), all names are bundled into a SINGLE Multicall3 transaction.
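The pricing arithmetic above can be sketched as pure functions. This is an illustrative client-side sketch, not the server's code: the on-chain per-name prices are assumed to come from the ETHRegistrarController, and the 5% buffer follows the description (excess is refunded).

```python
def buffered_value(price_wei, buffer_pct=5):
    """Add a safety buffer to an on-chain renewal price in wei.

    Any excess over the true price is refunded by the controller,
    so overpaying slightly is safe; underpaying reverts.
    """
    return price_wei * (100 + buffer_pct) // 100

def batch_renewal_value(prices_wei):
    """One payment covers the whole Multicall3 batch: sum the
    per-name prices, then buffer the total once."""
    return buffered_value(sum(prices_wei))

print(buffered_value(1_000_000))        # 1050000
print(batch_renewal_value([100, 200]))  # 315
```

Integer floor division keeps the value in whole wei, matching how contract-side math behaves.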

Parameters (JSON Schema)

  • names (required): ENS name or array of names to renew (e.g. "coffee.eth" or ["coffee.eth", "wallet.eth"])

  • years (optional): DEPRECATED — use duration instead. Number of years (converted to days internally).

  • duration (optional): Duration in days to extend registration (e.g. 7 for 1 week, 28 for a month, 365 for a year). Default: 365. Any positive number is valid — there is NO minimum.
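A minimal sketch of how a client might reconcile the deprecated `years` parameter with `duration`, using the defaults in the parameter table. The helper name and precedence rule are assumptions for illustration, not the server's documented behavior.

```python
def normalize_duration(duration=None, years=None):
    """Resolve a renewal duration in days.

    Assumed precedence: an explicit `duration` wins; the deprecated
    `years` is converted to days; otherwise fall back to the 365-day
    default. Any positive number of days is valid (no minimum).
    """
    if duration is not None:
        if duration <= 0:
            raise ValueError("duration must be a positive number of days")
        return duration
    if years is not None:
        return years * 365
    return 365

print(normalize_duration(duration=7))  # 7
print(normalize_duration(years=2))     # 730
print(normalize_duration())            # 365
```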
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full burden and discloses: permissionless execution, payment requirements, exact pricing source with 5% buffer/refund mechanism, and Multicall3 batching. Minor gap: no mention of failure modes (e.g., expired names) or gas estimation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information is logically chunked: core purpose → process contrast → duration semantics → permissions/pricing → batch mechanics. Every sentence advances understanding; no redundancy despite comprehensive coverage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates by specifying return content ('transaction data', 'exact on-chain pricing'). Covers batch vs single behavior and economic mechanics. Minor gap: could specify transaction data format (calldata vs hash).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3). Description adds valuable semantic context: concrete duration examples (7/28/365 days), unit flexibility clarification ('Accepts ANY duration'), and emphasizes the deprecation of 'years' parameter relative to 'duration'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb+resource ('Renew an ENS name or batch of names') and immediately distinguishes from sibling registration tools via 'Unlike registration... No commit/reveal needed.' Clearly scopes functionality to existing names only.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use (extending before expiry, gifting, protecting) and critical permission context ('Anyone can renew any name'). Contrasts process complexity with registration and notes batch bundling behavior, giving clear selection criteria vs alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_agent_directory

Search the AI agent directory — find registered agents by name, capability, protocol support, or reputation. Powered by the live ERC-8004 registry via 8004scan (110,000+ agents indexed across 50+ chains).

Returns agent identity, owner wallet/ENS, reputation scores, supported protocols (MCP/A2A/OASF), verification status, and links to 8004scan profiles.

Examples:

  • "trading agents on Base" → search for trading agents filtered to Base chain

  • "MCP agents" → find agents that support the Model Context Protocol

  • "high reputation agents" → set minReputation to find top-scored agents
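The example mappings above amount to plain request payloads. The field names come from the tool's parameter schema; the query strings and the reputation threshold are illustrative assumptions.

```python
# Each natural-language intent maps onto the tool's filter fields.
examples = {
    "trading agents on Base": {"query": "trading", "chain": "Base"},
    "MCP agents": {"capabilities": ["MCP"]},
    "high reputation agents": {"minReputation": 80},  # threshold chosen for illustration
}

for intent, params in examples.items():
    print(intent, "->", params)
```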

Parameters (JSON Schema)

  • chain (optional): Filter by chain name (e.g. "Ethereum Mainnet", "Base", "Solana Mainnet")

  • limit (optional): Max results (default 25, max 50)

  • query (optional): Search query — agent name, capability, or description

  • capabilities (optional): Filter by supported protocols (e.g. ["MCP", "A2A", "OASF"])

  • minReputation (optional): Minimum total score (0-100)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the data source (ERC-8004 registry via 8004scan), scale (110,000+ agents), and detailed return values (identity, ENS, reputation scores, protocols) compensating for the missing output schema. Lacks rate limit or caching details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose upfront, followed by data source context, return value disclosure, and actionable examples. No redundant sentences. The length is appropriate for a 5-parameter tool lacking an output schema, with high information density throughout.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Thoroughly complete given constraints. Compensates for missing output schema by detailing return fields (identity, wallet/ENS, reputation, verification status). Compensates for missing annotations by describing the live registry data source and indexing scope. Schema coverage is full, so no parameter gaps exist.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds value through the examples section, which semantically maps natural language intent ('trading agents on Base') to specific parameter usage (chain filter), helping agents understand query construction beyond the raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a clear verb ('Search') and specific resource ('AI agent directory'), listing search dimensions (name, capability, protocol, reputation). It distinguishes itself from sibling management tools (approve_operator, provision_agent_identity) by emphasizing the read-only registry search function via ERC-8004/8004scan.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides three concrete examples mapping natural language queries to specific parameter combinations (chain for Base, capabilities for MCP, minReputation for scores). However, it does not explicitly name sibling alternatives like get_agent_reputation for direct reputation lookups vs. directory searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_ens_names

Search ENS names using natural language. Supports all query types:

  • Filtered search: "4-letter words under 0.1 ETH"

  • Concept search: "ocean themed names" (semantic similarity across 3.6M names)

  • Creative search: "names for a coffee brand" (AI-generated suggestions)

  • Collection search: "crypto terms expiring soon"

  • Activity: "what sold recently?"

  • Availability check: "is coffee.eth taken?"

  • Bulk check: "check apple.eth, banana.eth, cherry.eth"

Returns structured results with name, price, owner, tags, and availability info.

Parameters (JSON Schema)

  • query (required): Natural language search query (e.g. "cheap 3-letter words", "ocean themed names", "is coffee.eth taken?")
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the return structure ('structured results with name, price, owner, tags, and availability info') since no output schema exists, and discloses key behavioral traits like 'semantic similarity across 3.6M names' and 'AI-generated suggestions.' It lacks explicit read-only or rate limit disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with the core purpose, followed by well-organized bullet points that efficiently communicate distinct capabilities. Every sentence earns its place; the bulleted examples are information-dense and the final sentence clarifies return values. Slightly verbose but justified by the richness of examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately compensates by detailing what the tool returns. For a single-parameter tool, the description provides sufficient complexity coverage by explaining both input patterns and output structure. No critical gaps remain for an agent to invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage for the single 'query' parameter, establishing a baseline of 3. The description adds significant semantic value beyond the schema by categorizing the types of natural language queries supported (filtered, concept, creative, collection, activity, availability, bulk), helping the agent understand what kinds of inputs produce good results.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb+resource ('Search ENS names') and immediately distinguishes this tool from siblings by emphasizing the 'natural language' interface. The bullet points further differentiate it from structured alternatives like check_availability or get_similar_names by showing diverse query capabilities (concept search, creative search, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context through seven specific query type examples with natural language patterns (e.g., '4-letter words under 0.1 ETH', 'is coffee.eth taken?'). While it doesn't explicitly name sibling alternatives or state 'when not to use,' the comprehensive examples effectively guide the agent on when to select this tool for natural language queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_knowledge

Search the ENS knowledge base — governance proposals, protocol documentation, developer insights, blog posts, forum discussions, and Farcaster casts from key ENS figures (Vitalik, Nick Johnson, etc.). Covers ENS governance and DAO proposals, protocol details (ENSv2, resolvers, subnames), community sentiment, historical decisions, and what specific people have said about a topic. Powered by semantic search over curated ENS sources.

Do NOT use this for name valuations, market data, or availability checks — use the other tools for those.

Parameters (JSON Schema)

  • limit (optional): Number of results to return (default 6)

  • query (required): Search query — what you want to know about ENS governance, protocol, ecosystem, or history

  • source (optional): Filter to a specific source. Omit to search all sources.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses 'Powered by semantic search' (revealing the matching algorithm) and 'curated ENS sources' (indicating scope limitations). It could improve by mentioning result format or pagination behavior, but the core behavioral traits are covered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense sentences followed by a clear negative constraint. Every clause adds value—either expanding scope (governance, protocol details) or constraining usage. No filler words or redundant restatements of the tool name.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter search tool with 100% schema coverage but no output schema, the description adequately covers the knowledge domain and search methodology. It implies return values through the 'limit' parameter reference to 'results', though explicitly stating the return structure (documents/snippets) would make it a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage (baseline 3), the description adds significant semantic context by listing concrete query examples like 'Vitalik', 'Nick Johnson', 'ENSv2', 'resolvers', and 'subnames'. This helps the agent formulate valid queries beyond the generic schema description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Search') and clearly defines the resource ('ENS knowledge base'), then enumerates specific content types (governance proposals, Farcaster casts, etc.). It effectively distinguishes from siblings by explicitly contrasting with 'name valuations, market data, or availability checks' which other tools handle.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit negative constraints: 'Do NOT use this for name valuations, market data, or availability checks — use the other tools for those.' This directly guides the agent toward alternatives (get_valuation, check_availability, get_market_activity) for those specific use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_ens_records

Set ENS resolver records for a name you own. Returns encoded transaction calldata ready to sign and broadcast.

Supports address records (ETH, BTC, SOL, etc.), text records (avatar, description, url, social handles, AI agent metadata), content hash (IPFS/IPNS), ENSIP-25 agent-registration records, and ENSIP-26 agent context and endpoint discovery.

Multiple records are batched into a single multicall transaction to save gas.

Common text record keys: avatar, description, url, email, com.twitter, com.github, com.discord, ai.agent, ai.purpose, ai.capabilities, ai.category.

ENSIP-25 support: Pass agentRegistration with registryAddress and agentId to automatically set the standardized agent-registration text record. This creates a verifiable on-chain binding between your ENS name and your agent identity in an ERC-8004 registry.

ENSIP-26 support: Pass agentContext to set the agent-context text record (free-form agent description). Pass agentEndpoints with protocol URLs (mcp, a2a, oasf, web) to set agent-endpoint[protocol] discovery records.

The returned transaction can be signed and submitted directly using any wallet framework (Coinbase AgentKit, ethers.js, etc.).
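The ENSIP-26 endpoint records above can be sketched as a pure expansion from the agentEndpoints input into text-record keys. The `agent-endpoint[protocol]` key format and the protocol set follow the description; the helper itself is hypothetical.

```python
def endpoint_records(agent_endpoints):
    """Expand protocol -> URL pairs into agent-endpoint[protocol]
    text records, per the ENSIP-26 convention described above."""
    allowed = {"mcp", "a2a", "oasf", "web"}
    records = {}
    for protocol, url in agent_endpoints.items():
        if protocol not in allowed:
            raise ValueError(f"unsupported protocol: {protocol}")
        records[f"agent-endpoint[{protocol}]"] = url
    return records

print(endpoint_records({"mcp": "https://example.com/mcp"}))
# {'agent-endpoint[mcp]': 'https://example.com/mcp'}
```

In practice each key/value pair would become one setText call inside the batched multicall.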

Parameters (JSON Schema)

  • name (required): ENS name to set records for (e.g. "myagent.eth")

  • records (required): Records to set on the name

  • walletAddress (required): Wallet address that owns the name (must sign the transaction)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden and excels. It explicitly states the tool returns 'encoded transaction calldata ready to sign and broadcast' rather than executing on-chain, discloses the multicall batching for gas optimization, and notes compatibility with specific wallet frameworks (Coinbase AgentKit, ethers.js).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is lengthy but appropriately structured for the complexity: core purpose first, followed by supported record types, common keys, ENSIP standard details, and finally transaction handling. Every paragraph serves a distinct purpose, though the ENSIP sections could be slightly more compact.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high complexity (nested objects, blockchain context), zero annotations, and no output schema, the description comprehensively covers all gaps. It explains the return value format (calldata), prerequisites (ownership), gas optimization strategy (multicall), and integration patterns (wallet frameworks), leaving no critical behavioral ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds significant value by enumerating common text record keys (avatar, com.twitter, ai.agent), explaining ENSIP-25/26 conceptual mappings ('creates a verifiable on-chain binding'), and providing concrete examples of address formats (ETH, BTC) and content hash types (IPFS/IPNS).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb and resource ('Set ENS resolver records for a name you own'), clearly distinguishing this from siblings like bulk_set_records (implied by singular 'name'), set_resolver (changes resolver vs records), and set_primary_name (reverse record). It explicitly scopes the operation to owned names only.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It establishes clear prerequisites (must own the name) and scope (single name, multiple records batched via multicall). While it doesn't explicitly name siblings as alternatives, the singular 'name' parameter and multicall explanation implicitly guide users toward bulk_set_records for multiple names. It clearly states this returns calldata for signing rather than executing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_primary_name

Set the primary ENS name (reverse resolution) for a wallet address.

This controls what name is displayed when someone looks up your Ethereum address. For example, instead of seeing "0x1234...abcd", they'd see "myname.eth".

Requirements:

  • You must own or control the ENS name

  • The name's ETH address record must point to your wallet

  • Only the wallet owner can set their own primary name

If the ETH address record doesn't match, use set_ens_records first to update it.

Only one primary name per address — setting a new one replaces the previous.
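The precondition check above can be sketched as a small decision function: given the name's currently resolved ETH address, decide whether set_ens_records must run first. The function name and case-insensitive comparison are illustrative assumptions.

```python
def next_step(resolved_eth_address, wallet_address):
    """Decide the required call order before setting a primary name.

    The name's ETH address record must already point at the wallet;
    if it resolves elsewhere (or not at all), set_ens_records must
    run first to update it.
    """
    if resolved_eth_address is None or resolved_eth_address.lower() != wallet_address.lower():
        return "set_ens_records"   # fix the ETH address record first
    return "set_primary_name"      # precondition met

print(next_step("0xAbC", "0xabc"))  # set_primary_name
print(next_step(None, "0xabc"))     # set_ens_records
```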

Parameters (JSON Schema)

  • name (required): ENS name to set as primary (e.g. "myname.eth")

  • walletAddress (required): Wallet address to set the primary name for (must sign the transaction)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Given no annotations, the description carries the burden well by disclosing the replacement behavior ('setting a new one replaces the previous'), authorization requirements, and the concept of reverse resolution. Could explicitly mention transaction/gas implications, but the signing requirement in schema context helps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded, followed by helpful example, bulleted requirements, conditional guidance, and behavioral note. Every sentence provides necessary information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a mutation tool with no output schema or annotations. Covers prerequisites, side effects, sibling differentiation, and operational constraints. Slightly short of perfect only because return values (transaction hash, etc.) aren't described, though not strictly required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both parameters fully documented in the schema (including examples and signing context). The description provides conceptual context for the parameters but doesn't need to add semantic details beyond the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Set the primary ENS name'), the resource type (reverse resolution), and distinguishes from sibling tools by explicitly mentioning when to use 'set_ens_records' instead.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent guidance with explicit prerequisites (ownership, address record matching, wallet ownership) and clear alternative tool reference ('use set_ens_records first') when preconditions aren't met.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_resolver

Change the resolver contract for an ENS name.

The resolver is where a name's records live (ETH address, text records, content hash, etc.). Changing the resolver points the name at a different contract.

Common use cases:

  • Migrating to the latest ENS Public Resolver

  • Pointing to a custom resolver (e.g. for off-chain/CCIP-read resolution)

  • Fixing a name that has no resolver set

Pass "public" as the resolver address to use the ENS Public Resolver (0x231b0Ee14048e9dCcD1d247744d114a4EB5E8E63).

WARNING: Records on the old resolver won't be visible after switching. Set up records on the new resolver first, or use the ENS Public Resolver which most names already use.
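The "public" shortcut described above can be sketched as a simple argument expansion. The Public Resolver address is the one quoted in the description; the validation rule (42-character 0x-prefixed hex) is a basic sanity check, not the server's exact logic.

```python
# Address quoted in the tool description for the ENS Public Resolver.
ENS_PUBLIC_RESOLVER = "0x231b0Ee14048e9dCcD1d247744d114a4EB5E8E63"

def resolve_resolver_arg(resolver):
    """Expand the "public" shortcut to the ENS Public Resolver address,
    otherwise require a plausible 20-byte hex contract address."""
    if resolver == "public":
        return ENS_PUBLIC_RESOLVER
    if not (resolver.startswith("0x") and len(resolver) == 42):
        raise ValueError("resolver must be 'public' or a 20-byte hex address")
    return resolver

print(resolve_resolver_arg("public"))
# 0x231b0Ee14048e9dCcD1d247744d114a4EB5E8E63
```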

Parameters (JSON Schema)

  • name (required): ENS name to change resolver for (e.g. "coffee.eth")

  • resolver (required): New resolver contract address. Use "public" for the ENS Public Resolver.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and excels: it explains the resolver concept ('where records live'), provides the concrete public resolver address (0x231b0Ee14048e9dCcD1d247744d114a4EB5E8E63), discloses the destructive side effect (old records become invisible), and gives mitigation guidance (set up records first).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every sentence earns its place: definition, concept explanation, use case bullets, concrete parameter example, and risk warning. The structure is logically sequenced with the most critical information (purpose) front-loaded and risks clearly demarcated.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a state-changing operation with data-loss risk and no output schema, the description is comprehensive. It covers domain context (what resolvers are), operational risks, specific parameter values, and migration patterns. No gaps remain given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100% (baseline 3), the description adds significant value by providing the actual hex address for the 'public' shortcut and explaining the conceptual role of the resolver parameter ('points the name at a different contract'). However, it doesn't add format validation rules or edge case handling that would justify a 5.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb and resource ('Change the resolver contract for an ENS name') and immediately distinguishes this from record-setting operations by explaining 'The resolver is where a name's records live.' This clarifies that unlike sibling tools like set_ens_records, this tool changes the contract pointer rather than the data itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Common use cases' with three specific scenarios (migrating to latest resolver, custom/CCIP-read resolution, fixing unset resolvers). The WARNING section implicitly defines when NOT to use casually ('Records on the old resolver won't be visible'), guiding the agent to prepare records first or use the public resolver option.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transfer_ens_name

Transfer ownership of an ENS name to another wallet address.

Automatically detects whether the name is wrapped (NameWrapper/ERC-1155) or unwrapped (BaseRegistrar/ERC-721) and builds the correct transaction.

Requirements:

  • The fromAddress must currently own the name

  • Both addresses must be valid Ethereum addresses

  • The name must be registered (not expired)

WARNING: This transfers FULL ownership. The recipient gains complete control including the ability to transfer, set records, or let the name expire.

Resolver records (avatar, addresses, etc.) are NOT affected by transfer — they stay on the name.
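The wrapped/unwrapped branch above can be sketched as a dispatch on the name's token standard. This is a conceptual sketch: the contract and method pairing follows the description (NameWrapper is ERC-1155, BaseRegistrar is ERC-721), while the function and return strings are illustrative.

```python
def transfer_method(is_wrapped):
    """Pick the transfer call for the name's token standard.

    Wrapped names live on the NameWrapper as ERC-1155 tokens;
    unwrapped .eth names are ERC-721 tokens on the BaseRegistrar.
    """
    if is_wrapped:
        return "NameWrapper.safeTransferFrom (ERC-1155)"
    return "BaseRegistrar.safeTransferFrom (ERC-721)"

print(transfer_method(True))
# NameWrapper.safeTransferFrom (ERC-1155)
```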

Parameters (JSON Schema)

  • name (required): ENS name to transfer (e.g. "coffee.eth")

  • toAddress (required): Recipient wallet address

  • fromAddress (required): Current owner wallet address (must sign the transaction)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses critical behavioral details: auto-detection of wrapper state, transaction building, irreversibility ('FULL ownership'), and preservation of resolver records. Could mention gas costs or return value for completeness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure with front-loaded purpose statement, followed by technical implementation details, bulleted requirements, safety warning, and side-effect clarification. Every sentence serves a distinct purpose with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage for a blockchain transaction tool despite lack of annotations and output schema. Covers prerequisites, safety warnings, and side effects (records preservation). Minor gap: does not describe return value or confirmation mechanism.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage, establishing a baseline of 3. The description adds valuable semantic context: fromAddress 'must sign the transaction' (indicating a blockchain authorization requirement) and clarifies the ownership validation logic beyond parameter types.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb ('Transfer ownership') and resource ('ENS name'), clearly distinguishing from siblings like set_ens_records or wrap_name. The auto-detection of wrapped/unwrapped states further clarifies scope beyond the basic title.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear Requirements section listing prerequisites (ownership, valid addresses, registration status). Implicitly distinguishes from bulk_transfer_ens_names by singular naming and from wrap/unwrap tools via auto-detection note, though it doesn't explicitly name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unwrap_name

Unwrap a .eth name from the ENS NameWrapper back to BaseRegistrar.

This converts the name from an ERC-1155 token back to an ERC-721 token. All fuses are cleared upon unwrapping.

Will fail if the CANNOT_UNWRAP fuse has been burned — that restriction is permanent.

Use cases:

  • Reverting a wrapped name to standard ERC-721 for compatibility

  • Regaining full control after wrapping without burning CANNOT_UNWRAP

  • Moving a name to a platform that only supports ERC-721

Parameters (JSON Schema)

  • name (required): ENS name to unwrap (e.g. "coffee.eth")

  • owner (required): Address of the current wrapped name owner
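The permanent CANNOT_UNWRAP restriction can be prechecked off-chain against a name's fuse bitmask before attempting the transaction. The bit value below is the real constant from the ENS NameWrapper contract; the `can_unwrap` helper itself is a hypothetical sketch, not this server's code.

```python
# CANNOT_UNWRAP's bit value comes from the ENS NameWrapper contract;
# can_unwrap itself is a hypothetical precheck, not this server's code.
CANNOT_UNWRAP = 1

def can_unwrap(fuses: int) -> bool:
    """True unless the permanent CANNOT_UNWRAP fuse has been burned."""
    return not (fuses & CANNOT_UNWRAP)
```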
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and excels: it discloses the token standard conversion (ERC-1155 to ERC-721), the destructive side effect (all fuses cleared), and the irreversible blocking condition (CANNOT_UNWRAP fuse). This gives the agent complete context on state changes and risks.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with the core action, followed by technical details, side effects, failure modes, and use cases. Every sentence serves a distinct purpose; no redundancy. The bulleted use cases provide scannable context without verbose prose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a mutation tool with no output schema. Covers the transformation mechanics, fuse behavior, prerequisites for failure, and practical use cases. Minor gap: does not mention return value structure, though this is acceptable given no output schema exists to document.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions ('ENS name to unwrap', 'Address of the current wrapped name owner'), establishing a baseline of 3. The description mentions 'owner' in the failure context but does not add syntax details, format constraints, or semantic clarifications beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Unwrap') and identifies the exact resource transformation (from ENS NameWrapper to BaseRegistrar). It clearly distinguishes this from the sibling tool 'wrap_name' by specifying the direction (back to BaseRegistrar) and token standard conversion (ERC-1155 to ERC-721).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit failure conditions ('Will fail if the CANNOT_UNWRAP fuse has been burned — that restriction is permanent') and three specific use cases that clarify when to use this versus alternatives. The use cases implicitly contrast with 'wrap_name' by emphasizing 'reverting' and 'regaining full control'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wash_check

Check if an ENS sale is a wash trade. Provide either a tx_hash to look up a pre-computed score, or provide label + buyer + seller + price_eth for live on-demand analysis. Returns a wash confidence score (0-1), a label (clean/suspicious/likely_wash), detected signals, and a human-readable summary.

Parameters (JSON Schema)

  • buyer (optional): Buyer wallet address for live analysis

  • label (optional): ENS label (e.g. "defirm") for live analysis

  • seller (optional): Seller wallet address for live analysis

  • tx_hash (optional): Transaction hash to look up a pre-computed wash score

  • price_eth (optional): Sale price in ETH for live analysis
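The two invocation modes reduce to a simple validation rule: tx_hash alone, or all four live-analysis parameters together. The `resolve_mode` helper below is a hypothetical sketch illustrating that rule, not the server's actual validator.

```python
# Hypothetical sketch of wash_check's two invocation modes; resolve_mode
# is illustrative, not the server's implementation.
def resolve_mode(tx_hash=None, label=None, buyer=None,
                 seller=None, price_eth=None) -> str:
    """Pick the wash_check invocation mode implied by the arguments."""
    if tx_hash is not None:
        return "lookup"  # pre-computed score keyed by transaction hash
    if all(v is not None for v in (label, buyer, seller, price_eth)):
        return "live"    # on-demand analysis of the described sale
    raise ValueError("Provide tx_hash, or all of label, buyer, seller, price_eth")
```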
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses output format in detail (confidence score 0-1, categorical labels, signals, summary). Implies read-only safety via 'check' and 'look up' verbs, though doesn't explicitly confirm no side effects or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences total. First states purpose; second covers input modes and return values. Every clause adds value beyond structured data. Zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description appropriately documents return values (score range, enum labels, signal details). For a 5-parameter analysis tool, covers essential ground, though could note any authentication requirements or rate limits for live analysis.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3), but description adds critical semantic grouping explaining the XOR logic: either tx_hash alone OR the four live-analysis params together. This compensates for schema's lack of required field constraints or conditional logic.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Check') and domain ('ENS sale wash trade'), clearly distinguishing from sibling registration/transfer tools like purchase_name or transfer_ens_name. Zero ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explains the dual invocation modes (tx_hash for cached vs label+buyer+seller+price_eth for live analysis) and their required parameter groupings. Lacks explicit 'when not to use' guidance, though no siblings offer similar fraud-detection functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wrap_name

Wrap an unwrapped .eth name into the ENS NameWrapper contract.

Wrapping converts the name from an ERC-721 token (BaseRegistrar) to an ERC-1155 token (NameWrapper). This enables:

  • Fuse permissions (restrict what can be done with the name)

  • Protected subnames (subnames with guaranteed permissions)

  • ERC-1155 compatibility for marketplaces and protocols

Returns a two-step transaction recipe: approve + wrap.

Available fuses (all IRREVERSIBLE once burned):

  • CANNOT_UNWRAP — prevents unwrapping back to BaseRegistrar

  • CANNOT_BURN_FUSES — prevents burning additional fuses

  • CANNOT_TRANSFER — prevents transfers

  • CANNOT_SET_RESOLVER — prevents resolver changes

  • CANNOT_SET_TTL — prevents TTL changes

  • CANNOT_CREATE_SUBDOMAIN — prevents new subnames

  • CANNOT_APPROVE — prevents approving operators

CANNOT_UNWRAP must be burned before any other fuses can be burned.
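The fuse names map to bit values in the NameWrapper contract, and the combined bitmask is what the wrap transaction carries. The bit values below are the real contract constants; `encode_fuses`, including its single-call enforcement of the CANNOT_UNWRAP rule, is an illustrative sketch rather than this server's implementation (a name may also satisfy the rule via a fuse burned in an earlier transaction).

```python
# Fuse bit values from the ENS NameWrapper contract; encode_fuses itself
# is an illustrative sketch, not this server's implementation.
FUSES = {
    "CANNOT_UNWRAP": 1,
    "CANNOT_BURN_FUSES": 2,
    "CANNOT_TRANSFER": 4,
    "CANNOT_SET_RESOLVER": 8,
    "CANNOT_SET_TTL": 16,
    "CANNOT_CREATE_SUBDOMAIN": 32,
    "CANNOT_APPROVE": 64,
}

def encode_fuses(names: list[str]) -> int:
    """Combine named fuses into the uint bitmask the wrap call expects,
    enforcing (within this single call) that CANNOT_UNWRAP accompanies
    any other fuse being burned."""
    mask = 0
    for n in names:
        mask |= FUSES[n]  # raises KeyError on an unknown fuse name
    if mask and not (mask & FUSES["CANNOT_UNWRAP"]):
        raise ValueError("CANNOT_UNWRAP must be burned before any other fuse")
    return mask
```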

Parameters (JSON Schema)

  • name (required): ENS name to wrap (e.g. "coffee.eth")

  • fuses (optional): Fuses to burn on wrap (e.g. ["CANNOT_UNWRAP"]). WARNING: irreversible.

  • owner (required): Address of the current name owner (must own the BaseRegistrar token)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses critical behavioral traits: returns a 'two-step transaction recipe' (approve + wrap), emphasizes IRREVERSIBLE nature of fuses, and explains the dependency constraint (CANNOT_UNWRAP must be burned first). Lacks explicit error conditions (e.g., what happens if name already wrapped).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded purpose, followed by technical context, benefits, return format, and exhaustive fuse reference. The fuse list is lengthy but justified by the irreversible nature of the operation. No wasted sentences, though density is high.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a complex blockchain operation with no output schema. Compensates by describing the return structure (two-step recipe), explains domain-specific concepts (fuses, token standards), and provides safety-critical warnings about irreversibility necessary for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage (baseline 3), description adds substantial value by enumerating all 7 valid fuse values with detailed behavioral explanations (e.g., 'prevents resolver changes'), which is critical for an irreversible operation. Also adds domain context about BaseRegistrar ownership.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb+resource ('Wrap an unwrapped .eth name into the ENS NameWrapper contract'), clearly distinguishing from siblings like unwrap_name and manage_fuses by specifying the exact contract and token standards involved (ERC-721 to ERC-1155 conversion).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implicitly defines when to use by specifying 'unwrapped .eth name' as input and explaining the transformation benefits. Warns about irreversible consequences of fuse burning. Could be improved by explicitly naming unwrap_name as the inverse operation or stating 'do not use if already wrapped'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
