Glama

Server Details

Deploy ERC-20 tokens on Ethereum, Base, BNB, Polygon via MCP. One call = deployed contract.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.7/5 across 9 of 9 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a distinct purpose: creating token intents, deploying on testnet, confirming deployments, checking status, simulating, getting gas prices, listing tokens/templates, and creating API keys. No two tools overlap in functionality.

Naming Consistency: 5/5

All tools follow a consistent 'ava_verb_noun' pattern using snake_case, making the tool set predictable and easy to navigate (e.g., ava_create_token_intent, ava_confirm_deployment).

Tool Count: 5/5

With 9 tools, the set is well-scoped for token deployment and management. It covers all necessary steps without being bloated or insufficient.

Completeness: 4/5

The tool surface covers the full lifecycle of token deployment: creation, deployment, confirmation, status tracking, simulation, gas estimation, and listing. A cancellation or update mechanism is arguably missing, but neither is essential for the primary use case.

Available Tools

9 tools
ava_confirm_deployment (Grade: A)

After signing and broadcasting the transaction returned by ava_create_token_intent, submit the txHash here to resolve the deployed contract address. The server monitors the chain for the transaction receipt and updates the intent status. Returns: status ('deploying' | 'deployed' | 'failed'), contractAddress when confirmed, explorerUrl, and tokenUrl. If status is still 'deploying', poll ava_get_deployment_status every 5-10 seconds until resolved. Possible failures: tx reverted (insufficient fee or gas), wrong chain, txHash already used.

Parameters (JSON Schema)

- apiKey (required): Your Ava Genesis API key (ava_live_...) used when the intent was created
- txHash (required): 0x-prefixed 32-byte transaction hash from your signed and broadcast transaction
- intentId (required): The intentId returned by ava_create_token_intent
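Before calling ava_confirm_deployment, an agent can validate its inputs locally. A minimal sketch, assuming the 0x-prefixed 32-byte txHash format stated in the parameter docs; the helper names and regex are illustrative, not part of the server API:

```typescript
// Hypothetical pre-flight check for ava_confirm_deployment arguments.
function isValidTxHash(txHash: string): boolean {
  // 0x prefix followed by exactly 64 hex characters (32 bytes)
  return /^0x[0-9a-fA-F]{64}$/.test(txHash);
}

interface ConfirmArgs { apiKey: string; intentId: string; txHash: string }

function buildConfirmArgs(apiKey: string, intentId: string, txHash: string): ConfirmArgs {
  if (!isValidTxHash(txHash)) {
    throw new Error("txHash must be a 0x-prefixed 32-byte hex string");
  }
  return { apiKey, intentId, txHash };
}
```

Failing fast on a malformed txHash avoids burning a tool call on the "txHash already used" or wrong-format failure modes listed above.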
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral aspects. It discloses that the tool monitors on-chain and returns the contract address, but does not mention potential failures, timeout behaviors, or authentication details beyond submitting the txHash. This is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences that are front-loaded: the first tells exactly when to use the tool, and the second explains what it does. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema and moderate complexity, the description covers the essential flow: it is a confirmation step, it monitors on-chain, and it returns the address. However, it does not explain error handling, timeouts, or what 'resolves the intent' means, which could be useful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has three parameters, but only txHash has a description. The tool description only adds context for txHash ('submit the txHash here'), while apiKey and intentId are left without additional explanation. Given low schema coverage (33%), the description does not compensate sufficiently.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: after signing/broadcasting the createToken() transaction, submit the txHash to monitor on-chain, resolve the intent, and return the deployed contract address. It uses specific verb and resource, and distinguishes from siblings like ava_deploy_token and ava_get_deployment_status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly specifies when to use: 'After signing and broadcasting the transaction returned by ava_create_token_intent'. It implies the sequence but does not explicitly state when not to use or mention alternatives, which would elevate clarity further.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_create_api_key (Grade: B)

Create a new Ava Genesis API key. Use this once to obtain your ava_live_... key before calling any other tool. The raw key is returned only once in the response — store it securely, it cannot be retrieved again. Each key tracks its own deployment history and rate limits independently. This tool requires no existing API key — it is the bootstrap step for new agents.

Parameters (JSON Schema)

- name (optional): Human-readable label for this key, e.g. 'Treasury Agent v1'. Helps identify the key in your dashboard.
- monthlyLimit (optional): Maximum number of mainnet deployments allowed per calendar month (default: 100). Does not limit Sepolia testnet deployments.
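A sketch of the bootstrap call arguments, assuming only what the parameter table states (field names and the default of 100); how you persist the returned raw key is up to you, since it is shown only once and cannot be retrieved again:

```typescript
// Hypothetical helper for the one-time bootstrap step.
interface CreateApiKeyArgs { name?: string; monthlyLimit?: number }

function buildCreateApiKeyArgs(name?: string, monthlyLimit: number = 100): CreateApiKeyArgs {
  // monthlyLimit caps mainnet deployments per calendar month (default 100);
  // Sepolia testnet deployments are not counted against it.
  const args: CreateApiKeyArgs = { monthlyLimit };
  if (name !== undefined) args.name = name;
  return args;
}
```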
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the key is returned only once, which is important behavioral info. However, it omits other behaviors like whether existing keys are affected, rate limits, or auth requirements. Since no annotations exist, the description should cover more.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two short sentences with zero redundancy. Every word serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool lacks an output schema, and the description does not clarify return format or error conditions. It also omits context like authentication requirements or key limits. Given the tool's simplicity, it is still incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both parameters described in schema). The description does not add extra meaning beyond the schema; it merely confirms creation. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Create a new API key,' specifying the verb (create) and resource (API key). This purpose is distinct from siblings like ava_create_token_intent or ava_deploy_token, which involve tokens, not API keys.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a single usage hint ('store it securely') but no explicit guidance on when to use this tool vs alternatives, nor any conditions or prerequisites. It lacks context for appropriate use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_create_token_intent (Grade: A)

Deploy an ERC-20 token on mainnet or testnet using your agent's own wallet. Returns encoded calldata (to, value, data) — your agent signs and broadcasts the transaction, paying gas + $10 fee directly from its wallet. Same contract and fee flow as human users on the website. Your agent owns the deployed contract from the moment of deploy. Works on Ethereum, Base, BNB Chain, Polygon, and Sepolia testnet. After broadcasting the tx, call ava_confirm_deployment with the txHash to resolve the contract address. Use ava_simulate_token first to validate config and estimate fees without spending gas.

Parameters (JSON Schema)

- name (required): Token name, e.g. 'Agent Treasury Token'
- chain (required): Target chain. Base and BNB Chain have the lowest gas fees and are recommended for most agent deployments.
- apiKey (required): Your Ava Genesis API key (ava_live_...)
- supply (required): Initial token supply as a number string, e.g. '1000000'. Avoid scientific notation.
- symbol (required): Token ticker symbol, e.g. 'ATT' (max 16 chars, uppercase recommended)
- decimals (optional): Token decimal places (default: 18)
- features (optional): Optional feature flags. Premium features (mintable, pausable, blacklist, transferTax, antiWhale) trigger the $50 tier. Burnable alone is the $20 tier.
- template (optional): Optional preset that pre-fills feature flags. Use ava_list_templates to see each preset's config.
- callbackUrl (optional): Optional HTTPS webhook URL to receive status updates (deploying, deployed, failed). Signed with HMAC-SHA256.
- idempotencyKey (optional): Optional unique string to prevent duplicate deployments on retry. The same key returns the original intent instead of creating a new one.
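An illustrative argument shape for this tool, following the parameter list above. The chain identifier string, the validation rules, and all helper names here are assumptions; the server's own schema is authoritative:

```typescript
// Hypothetical argument shape and client-side validation sketch.
interface TokenIntentArgs {
  apiKey: string;
  name: string;
  symbol: string;
  supply: string;          // plain number string, e.g. '1000000'
  chain: string;           // e.g. 'base' (assumed identifier)
  decimals?: number;       // default 18
  idempotencyKey?: string; // same key on retry returns the original intent
}

function validateIntentArgs(args: TokenIntentArgs): string[] {
  const errors: string[] = [];
  if (!/^\d+$/.test(args.supply)) {
    errors.push("supply must be a plain number string (no scientific notation)");
  }
  if (args.symbol.length > 16) {
    errors.push("symbol exceeds 16 characters");
  }
  return errors;
}

const intentArgs: TokenIntentArgs = {
  apiKey: "ava_live_...",           // placeholder, not a real key
  name: "Agent Treasury Token",
  symbol: "ATT",
  supply: "1000000",
  chain: "base",                    // low gas fees per the chain note above
  idempotencyKey: "deploy-att-001", // makes retries safe
};

// The tool returns calldata { to, value, data }; the agent signs and
// broadcasts it with its own wallet, then passes the resulting txHash to
// ava_confirm_deployment.
```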
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description transparently explains the tool returns encoded calldata (not a direct deployment), mentions gas and $10 fee, and specifies supported chains. It discloses the partial workflow and required follow-up step.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (~80 words) and front-loaded with the main purpose. Every sentence adds value, though a more structured format could improve readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 10 parameters (some nested) and no output schema, the description lacks detail on parameter meanings and return format. It explains the overall flow but fails to provide sufficient guidance for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is very low (10%); only 'chain' is described. The description does not explain the meaning of most parameters (e.g., features, template, decimals, callbackUrl, idempotencyKey), leaving agents with limited guidance beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool deploys an ERC-20 token using your own wallet and returns encoded calldata for the agent to sign and broadcast. It distinguishes from sibling ava_deploy_token by specifying that this tool only prepares the transaction, not completes it.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context on when to use (for deploying tokens with agent signing) and explicitly instructs to call ava_confirm_deployment afterward. However, it does not explicitly exclude alternatives or mention when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_deploy_token (Grade: A)

Deploy an ERC-20 token on Sepolia testnet with no wallet — TESTNET ONLY. The platform signs on your behalf. Use this for integration testing only. For mainnet deployments use ava_create_token_intent — your agent signs with its own wallet, pays gas + $10 fee directly, same as human users. Nothing on mainnet is free.

Parameters (JSON Schema)

- name (required): Token name, e.g. 'My Protocol Token'
- chain (required): Only 'sepolia' is allowed. Mainnet requires ava_create_token_intent.
- apiKey (required): Your Ava Genesis API key (ava_live_...)
- supply (required): Initial supply as a number string, e.g. '1000000'
- symbol (required): Token symbol, e.g. 'MPT' (max 16 chars, uppercase)
- decimals (optional): Decimals (default: 18)
- features (optional): Optional feature flags (override template)
- template (optional): Optional preset configuration
- callbackUrl (optional): Webhook URL for status events
- ownerAddress (optional): Wallet address that will receive the tokens and own the contract. If omitted, tokens remain with the platform deployer.
- idempotencyKey (optional): Unique key to prevent duplicate deploys on retry
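The testnet-only constraint on the chain parameter can be enforced client-side so a mainnet chain fails fast instead of round-tripping to the server. A minimal sketch; the helper name is illustrative:

```typescript
// Guard mirroring the "Only 'sepolia' is allowed" constraint above.
function assertTestnetChain(chain: string): void {
  if (chain !== "sepolia") {
    throw new Error(
      `ava_deploy_token is testnet-only (got '${chain}'); use ava_create_token_intent for mainnet`
    );
  }
}
```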
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so description carries full burden. Discloses that platform signs on behalf and that mainnet is not free. Could mention return behavior or async nature, but adequately covers key behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences. First sentence delivers core purpose immediately. No fluff, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 11 parameters and no output schema, description covers testnet constraint, platform signing, and mainnet alternative. Missing return value description but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. Description does not add additional parameter details beyond schema, but schema already provides good descriptions. No need to repeat.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Deploy), resource (ERC-20 token), environment (Sepolia testnet), and unique aspect (no wallet, platform signs). It distinguishes from sibling tool ava_create_token_intent for mainnet.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (testnet integration testing) and when not to use (mainnet). Provides alternative tool name (ava_create_token_intent) and explains cost differences.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_get_deployment_status (Grade: B)

Poll the current status of a token deployment by its intentId. Use this after ava_deploy_token times out, or to check progress of an ava_create_token_intent flow. Returns: status ('deploying' | 'deployed' | 'failed'), contractAddress and explorer links when deployed, errorMessage on failure. Poll every 5-10 seconds. Most deployments complete within 60 seconds. Possible errors: insufficient fee sent, gas spike, RPC timeout — check errorMessage field.

Parameters (JSON Schema)

- apiKey (required): Your Ava Genesis API key (ava_live_...) used when the deployment was created
- intentId (required): The intentId returned by ava_deploy_token, ava_create_token_intent, or ava_confirm_deployment
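The 5-10 second polling guidance above can be sketched as a loop. `callTool` stands in for whatever tool-invocation method your MCP client exposes; the status values and timings come from the description, everything else is an assumption:

```typescript
// Sketch of the recommended polling loop for ava_get_deployment_status.
type DeployStatus = "deploying" | "deployed" | "failed";
interface StatusResult { status: DeployStatus; contractAddress?: string; errorMessage?: string }

// 'deployed' and 'failed' are terminal; 'deploying' means keep polling.
function isTerminal(status: DeployStatus): boolean {
  return status !== "deploying";
}

async function pollDeployment(
  callTool: (name: string, args: object) => Promise<StatusResult>,
  apiKey: string,
  intentId: string,
  intervalMs: number = 7_000, // within the suggested 5-10 s window
  maxAttempts: number = 12    // ~84 s, past the typical 60 s completion time
): Promise<StatusResult> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await callTool("ava_get_deployment_status", { apiKey, intentId });
    if (isTerminal(result.status)) return result;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error("Deployment still pending after polling window");
}
```

On a "failed" result, check the errorMessage field for the cause (insufficient fee, gas spike, RPC timeout) before retrying.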
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It correctly implies a read-only, idempotent operation through the verb 'Poll'. However, it does not disclose potential side effects, authentication requirements (though apiKey is a parameter), rate limits, or error scenarios. The description provides basic transparency but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is front-loaded with the primary action and output. Every word is functional; there is no redundancy or fluff. It efficiently conveys the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description lists the returned items (status, contract address, explorer links) but does not specify their format, possible statuses, or behavior when a deployment is not found or still pending. Sibling tools like ava_confirm_deployment suggest a workflow, but no sequencing guidance is given. The description is adequate but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning the description adds no explanation for the two parameters (apiKey and intentId). It mentions 'intentId' in passing but does not define what it is, how to obtain it, or its format. The apiKey is not discussed at all. The description fails to compensate for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Poll the current status'), the resource ('a token deployment'), and the method of identification ('by its intentId'). It also lists the returned information: status, contract address, and explorer links. This distinguishes it from sibling tools like ava_deploy_token (deployment) and ava_confirm_deployment (confirmation).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., a deployment must exist), when not to use it, or contrasts with sibling tools like ava_get_gas_prices. The user receives no context for appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_get_gas_prices (Grade: A)

Get current gas prices and deployment cost estimates across all chains. Use to pick the cheapest chain before deploying. No auth required.

Parameters (JSON Schema)

No parameters
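Since the tool has no published output schema, the per-chain cost map below is an assumed shape. The sketch shows the intended use named in the description: compare estimates and pick the cheapest chain before deploying:

```typescript
// Hypothetical helper over an assumed { chain: costEstimate } response shape.
function cheapestChain(costEstimates: Record<string, number>): string {
  // Reduce over [chain, cost] entries, keeping the lowest-cost entry.
  return Object.entries(costEstimates)
    .reduce((best, current) => (current[1] < best[1] ? current : best))[0];
}
```

Under this assumed shape, `cheapestChain({ ethereum: 12.4, base: 0.3, polygon: 0.2 })` returns "polygon".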

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Only states no auth required. Does not mention idempotency, data freshness, or side effects. For a read operation, more transparency is needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two clear sentences, front-loaded with purpose, no wasted words. Highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes purpose and usage, but lacks description of output format or example. Without output schema, more detail would help an agent interpret results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters. With schema coverage at 100% and no parameters, baseline is 4. Description adds no param info but none needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'Get' and resource 'gas prices and deployment cost estimates across all chains'. Distinguishes from siblings which involve deployment, API keys, etc. No ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use: before deploying to pick cheapest chain. Mentions no auth required. Does not mention when not to use, but not necessary given unique functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_list_my_tokens (Grade: B)

List all ERC-20 tokens deployed via your API key, newest first. Use to audit past deployments, find a contract address, or check deployment history. Supports filtering by status and chain. Returns up to 50 results by default (max 200). Does not return tokens deployed by other API keys even on the same account.

Parameters (JSON Schema)

- chain (optional): Filter by chain. Omit to return tokens across all chains.
- limit (optional): Number of results to return (default: 50, max: 200)
- apiKey (required): Your Ava Genesis API key (ava_live_...)
- status (optional): Filter by deployment status. Omit to return all statuses.
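The documented bounds of the limit parameter (default 50, max 200) can be clamped client-side before the call. A minimal sketch; the helper name is illustrative:

```typescript
// Clamp limit to the documented range: default 50, max 200, at least 1.
function normalizeLimit(limit: number = 50): number {
  return Math.min(Math.max(1, Math.floor(limit)), 200);
}
```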
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It states 'tokens deployed via your API key' but does not clarify what 'deployed via' means, nor mention that this is a read-only operation (though implied). It also fails to disclose pagination behavior or the meaning of return values.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, using a single sentence. It is front-loaded with the main action. However, it may be too brief at the expense of necessary details like pagination or output structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (4 parameters, no output schema, no annotations), the description is insufficient. It does not explain expected output format, error conditions, or how the apiKey parameter ties into authentication. The tool lacks completeness for an agent to use it effectively without further context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 25% (only limit has a description). The description adds little beyond the schema: it mentions 'optional status and chain filters' but does not explain the apiKey parameter or provide context for limit's default behavior. With low coverage, the description should compensate but falls short.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'List all tokens deployed via your API key' with optional filters, using a specific verb and resource. It distinguishes from sibling tools like ava_deploy_token or ava_create_token_intent, which are creation/confirmation operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'optional status and chain filters', giving some context, but does not explicitly state when to use this tool versus alternatives like ava_get_deployment_status or ava_list_templates. No clear when-to-use or when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_list_templates (Grade: A)

List available token templates (utility, governance, reward, treasury, community, meme). Each returns pre-configured feature flags.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It implies a read-only operation by using 'list' and mentions returning data, but does not explicitly state non-destructiveness, authentication needs, or other behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient 12-word sentence. Every part adds value: verb, resource, examples, and return information. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters, no output schema), the description adequately explains the purpose and output. However, it could mention that templates are used for token creation, linking to sibling tools, for enhanced completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With no parameters, the description's mention of 'returns pre-configured feature flags' adds value beyond the empty schema. It explains what the tool produces, which is adequate for a parameterless tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and resource 'available token templates', with specific categories listed in parentheses. It differentiates from sibling tools like ava_create_token_intent or ava_deploy_token, which are for creation and deployment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like ava_list_my_tokens or ava_simulate_token. The description simply states what it does without context on prerequisites or typical workflow position.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_simulate_token (Grade: A)

Validate a token configuration and get a fee estimate without spending gas or deploying anything. Use this before ava_deploy_token or ava_create_token_intent to confirm the config is valid and see the exact ETH cost. Returns: estimated fee in ETH and USD, resolved feature flags, tier (Starter/Basic/Premium), and any validation errors. Does not create an intent or charge any fee.

Parameters (JSON Schema)

- name (required): Token name, e.g. 'My Protocol Token'
- chain (required): Target chain for the fee estimate. Base and BNB Chain have the lowest gas fees.
- apiKey (required): Your Ava Genesis API key (ava_live_...)
- supply (required): Initial token supply as a number string, e.g. '1000000'. Avoid scientific notation.
- symbol (required): Token ticker symbol, e.g. 'MPT' (max 16 chars, uppercase recommended)
- decimals (optional): Token decimal places (default: 18). Use 6 for stablecoin-like tokens, 18 for standard ERC-20.
- features (optional): Optional feature overrides. Any premium feature (mintable, pausable, blacklist, transferTax, antiWhale) triggers the $50 tier. Burnable alone is the $20 Basic tier.
- template (optional): Optional preset that pre-fills feature flags. Use ava_list_templates to see each preset's config.
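The tier rules stated in the features parameter (any premium flag means the $50 Premium tier, burnable alone the $20 Basic tier, otherwise Starter) can be mirrored locally as a preview. ava_simulate_token remains the authoritative check; this sketch only anticipates its tier result:

```typescript
// Local preview of the documented fee-tier rules. Helper name is illustrative.
const PREMIUM_FLAGS = ["mintable", "pausable", "blacklist", "transferTax", "antiWhale"];

function expectedTier(features: Record<string, boolean> = {}): "Starter" | "Basic" | "Premium" {
  if (PREMIUM_FLAGS.some(flag => features[flag])) return "Premium"; // any premium flag wins
  if (features["burnable"]) return "Basic";                         // burnable alone
  return "Starter";                                                 // no feature flags
}
```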
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description must fully disclose behavior. It states 'without deploying' but fails to clarify if it requires authentication, whether it creates any state, or if it incurs costs. The description does not explain the simulation's side effects or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no superfluous words. The first sentence states the core action, and the second provides usage guidance. Every element is valuable and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complex tool with 8 parameters, nested objects, and no output schema. Description covers purpose and relation to a sibling but lacks behavioral details, parameter clarifications, or expected output format. The low annotation coverage exacerbates the incompleteness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is very low (13%: only 'decimals' has a default value). The description adds no parameter context, leaving agents to guess meanings for apiKey, chain, supply, symbol, features, template. With many required parameters undocumented, this is insufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool validates a token config and gets a fee estimate without deploying. The verb 'validate' and 'get' with specific resource 'token config' and 'fee estimate' make the purpose precise. It effectively distinguishes itself from ava_deploy_token as a pre-deployment sanity check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to use before calling ava_deploy_token, providing a clear context. However, it does not discuss when not to use or mention other alternatives like ava_create_token_intent or ava_confirm_deployment.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

