
Server Details

Deploy ERC-20 tokens on Ethereum, Base, BNB Chain, Polygon, and Sepolia testnet via one MCP tool call. Returns a deployed contract address. $10 flat fee per mainnet deployment — same as human users. Free on Sepolia testnet. Requires an API key from https://avagenesis.com/api/agents/keys

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 9 of 9 tools scored. Lowest: 3.1/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clear, distinct purpose. Although ava_create_token_intent and ava_deploy_token both deal with token creation, they are explicitly separated by network (mainnet vs testnet) and signing method, with detailed descriptions preventing confusion.

Naming Consistency: 5/5

All tools follow a consistent 'ava_<verb>_<noun>' pattern in snake_case, such as ava_create_token_intent, ava_get_gas_prices, and ava_list_my_tokens. No mixing of conventions.

Tool Count: 5/5

With 9 tools, the server covers the essential operations for token deployment (planning, simulating, deploying, confirming, monitoring) without unnecessary bloat. Each tool serves a necessary step in the workflow.

Completeness: 5/5

The tool set covers the full deployment lifecycle: API key management, template selection, simulation, gas estimation, mainnet and testnet deployment, confirmation, status checking, and listing. No obvious gaps for the stated purpose of deploying ERC-20 tokens.

Available Tools

9 tools
ava_confirm_deployment: A

After signing and broadcasting the transaction returned by ava_create_token_intent, submit the txHash here to resolve the deployed contract address. The server monitors the chain for the transaction receipt and updates the intent status. Returns: status ('deploying' | 'deployed' | 'failed'), contractAddress when confirmed, explorerUrl, and tokenUrl. If status is still 'deploying', poll ava_get_deployment_status every 5-10 seconds until resolved. Possible failures: tx reverted (insufficient fee or gas), wrong chain, txHash already used.

Parameters (JSON Schema)
apiKey (required): Your Ava Genesis API key (ava_live_...) used when the intent was created
txHash (required): 0x-prefixed 32-byte transaction hash from your signed and broadcast transaction
intentId (required): The intentId returned by ava_create_token_intent
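The confirm-then-poll flow the description prescribes can be sketched as below. `mcp_call` is a hypothetical helper standing in for however your MCP client invokes a tool and parses its JSON result; the tool names and the `status`, `contractAddress`, and `errorMessage` fields come from the descriptions on this page, and everything else is illustrative.

```python
import time


def resolve_contract_address(mcp_call, api_key, intent_id, tx_hash,
                             poll_interval=5.0, timeout=120.0):
    """Submit a broadcast txHash, then poll until the intent resolves."""
    # Step 1: register the txHash against the intent.
    result = mcp_call("ava_confirm_deployment",
                      apiKey=api_key, intentId=intent_id, txHash=tx_hash)
    # Step 2: if still 'deploying', poll every 5-10 seconds as advised.
    deadline = time.monotonic() + timeout
    while result["status"] == "deploying":
        if time.monotonic() > deadline:
            raise TimeoutError("deployment did not resolve in time")
        time.sleep(poll_interval)
        result = mcp_call("ava_get_deployment_status",
                          apiKey=api_key, intentId=intent_id)
    if result["status"] == "failed":
        # e.g. tx reverted, wrong chain, or txHash already used.
        raise RuntimeError(result.get("errorMessage", "deployment failed"))
    return result["contractAddress"]
```

The 5-second default matches the polling cadence the description recommends; most deployments are stated to complete within 60 seconds, so a 120-second timeout leaves headroom.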
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description discloses that the server monitors on-chain and returns contract address. Does not mention failure modes, but conveys asynchronous nature.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is efficient and front-loaded with key actions. Could be slightly more structured but wastes no words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes input and outcome clearly. Missing output schema details and error handling, but adequate given sibling tools and complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is only 33% (only txHash has description). Tool description adds context for txHash (from previous step) but not for apiKey or intentId. Partially compensates for low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states explicitly to submit txHash after createToken() transaction, with clear verb 'confirm' and resource 'deployment'. Distinguishes from siblings by specifying prerequisite step.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool (after ava_create_token_intent) and what it does (monitors and returns address). Provides clear context, though no alternatives mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_create_api_key: A

Create a new Ava Genesis API key. Use this once to obtain your ava_live_... key before calling any other tool. The raw key is returned only once in the response — store it securely, it cannot be retrieved again. Each key tracks its own deployment history and rate limits independently. This tool requires no existing API key — it is the bootstrap step for new agents.

Parameters (JSON Schema)
name (optional): Human-readable label for this key, e.g. 'Treasury Agent v1'. Helps identify the key in your dashboard.
monthlyLimit (optional): Maximum number of mainnet deployments allowed per calendar month (default: 100). Does not limit Sepolia testnet deployments.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses one critical behavioral trait: the raw key is returned only once and must be stored. No annotations exist, so this is helpful. However, lacks details on authentication requirements, side effects, or limits beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Front-loaded with the core action. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with 2 optional params and no output schema, the description is adequate but missing some context: no mention of response format (e.g., includes key ID?), error handling, or idempotency. Could be improved.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions cover 100% of parameters ('Label for the key', 'Max deployments/month'). Description adds no extra meaning beyond the schema, meeting baseline expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Create a new API key' with a specific verb and resource. The additional warning about storage adds value. Sibling tools are all different (deployments, tokens), so no confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Does not mention prerequisites, permissions, or typical use cases. The agent must infer when to create an API key.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_create_token_intent: A

Deploy an ERC-20 token on mainnet or testnet using your agent's own wallet. Returns encoded calldata (to, value, data) — your agent signs and broadcasts the transaction, paying gas + $10 fee directly from its wallet. Same contract and fee flow as human users on the website. Your agent owns the deployed contract from the moment of deploy. Works on Ethereum, Base, BNB Chain, Polygon, and Sepolia testnet. After broadcasting the tx, call ava_confirm_deployment with the txHash to resolve the contract address. Use ava_simulate_token first to validate config and estimate fees without spending gas.

Parameters (JSON Schema)
name (required): Token name, e.g. 'Agent Treasury Token'
chain (required): Target chain. Base and BNB Chain have the lowest gas fees and are recommended for most agent deployments.
apiKey (required): Your Ava Genesis API key (ava_live_...)
supply (required): Initial token supply as a number string, e.g. '1000000'. Avoid scientific notation.
symbol (required): Token ticker symbol, e.g. 'ATT' (max 16 chars, uppercase recommended)
decimals (optional): Token decimal places (default: 18)
features (optional): Optional feature flags. Premium features (mintable, pausable, blacklist, transferTax, antiWhale) trigger $50 tier. Burnable alone is $20 tier.
template (optional): Optional preset that pre-fills feature flags. Use ava_list_templates to see each preset's config.
callbackUrl (optional): Optional HTTPS webhook URL to receive status updates (deploying, deployed, failed). Signed with HMAC-SHA256.
idempotencyKey (optional): Optional unique string to prevent duplicate deployments on retry. Same key returns the original intent instead of creating a new one.
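The simulate → create intent → sign/broadcast → confirm flow described above might look like this. Both `mcp_call` and `sign_and_broadcast` are caller-supplied assumptions (the latter is your wallet code, taking the returned `to`, `value`, `data` calldata and returning a 0x-prefixed txHash); `intentId` and the calldata field names follow the descriptions on this page, while the `validationErrors` field name is a guess, since no output schema is published.

```python
import uuid


def deploy_mainnet_token(mcp_call, sign_and_broadcast, api_key, *,
                         name, symbol, supply, chain="base"):
    """Two-step mainnet deployment: the agent signs with its own wallet."""
    # 1. Validate the config and preview fees without spending gas.
    sim = mcp_call("ava_simulate_token", apiKey=api_key, name=name,
                   symbol=symbol, supply=supply, chain=chain)
    if sim.get("validationErrors"):
        raise ValueError(f"invalid config: {sim['validationErrors']}")
    # 2. Create the intent; idempotencyKey makes retries safe.
    intent = mcp_call("ava_create_token_intent", apiKey=api_key,
                      name=name, symbol=symbol, supply=supply, chain=chain,
                      idempotencyKey=str(uuid.uuid4()))
    # 3. Sign and broadcast the returned calldata from the agent wallet,
    #    paying gas plus the $10 flat fee directly.
    tx_hash = sign_and_broadcast(intent["to"], intent["value"],
                                 intent["data"])
    # 4. Resolve the contract address.
    return mcp_call("ava_confirm_deployment", apiKey=api_key,
                    intentId=intent["intentId"], txHash=tx_hash)
```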
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses return of encoded calldata, need for agent to sign and broadcast, $10 fee, and required follow-up call. Does not cover error handling or idempotency, but overall transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise, front-loaded with main purpose, and each sentence adds value. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 10 parameters and no output schema or annotations, description covers the essential workflow, chains, and fee. Could include more on optional parameters or output, but sufficient for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Description lists key parameters (name, symbol, supply) but does not explain apiKey, decimals, features, template, etc. Schema coverage is low (10%), and description adds minimal detail beyond the token fields. Adequate but incomplete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool deploys an ERC-20 token using the user's wallet, specifies supported chains, and distinguishes from sibling tools like ava_confirm_deployment by outlining the two-step flow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explains when to use (to deploy token) and what to do after (call ava_confirm_deployment), mentions supported chains and fee. Lacks explicit 'when not to use' but provides sufficient context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_deploy_token: A

Deploy an ERC-20 token on Sepolia testnet with no wallet — TESTNET ONLY. The platform signs on your behalf. Use this for integration testing only. For mainnet deployments use ava_create_token_intent — your agent signs with its own wallet, pays gas + $10 fee directly, same as human users. Nothing on mainnet is free.

Parameters (JSON Schema)
name (required): Token name, e.g. 'My Protocol Token'
chain (required): Only 'sepolia' is allowed. Mainnet requires ava_create_token_intent.
apiKey (required): Your Ava Genesis API key (ava_live_...)
supply (required): Initial supply as a number string, e.g. '1000000'
symbol (required): Token symbol, e.g. 'MPT' (max 16 chars, uppercase)
decimals (optional): Decimals (default: 18)
features (optional): Optional feature flags (override template)
template (optional): Optional preset configuration
callbackUrl (optional): Webhook URL for status events (optional)
ownerAddress (optional): Wallet address that will receive the tokens and own the contract. If omitted, tokens remain with the platform deployer.
idempotencyKey (optional): Unique key to prevent duplicate deploys on retry (optional)
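A minimal sketch of the free, platform-signed Sepolia path, again assuming a hypothetical `mcp_call` helper. The fallback to ava_get_deployment_status follows that tool's own guidance ("use this after ava_deploy_token times out"); the `status` and `intentId` response fields are assumptions, since no output schema is published.

```python
def deploy_testnet_token(mcp_call, api_key, *, name, symbol, supply,
                         owner_address=None):
    """Platform-signed Sepolia deployment; no wallet or gas needed."""
    args = dict(apiKey=api_key, chain="sepolia",
                name=name, symbol=symbol, supply=supply,
                idempotencyKey=f"test-{name}-{symbol}")  # safe to retry
    if owner_address is not None:
        # Without this, tokens stay with the platform deployer.
        args["ownerAddress"] = owner_address
    result = mcp_call("ava_deploy_token", **args)
    if result.get("status") == "deploying":
        # Long-running deploys: fall back to polling by intentId.
        result = mcp_call("ava_get_deployment_status",
                          apiKey=api_key, intentId=result["intentId"])
    return result
```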
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full behavioral disclosure burden. It reveals key traits: platform signs on behalf (no wallet needed), testnet only, and that the tool is free (no gas for user) unlike mainnet. However, it does not mention potential delays, error handling, or idempotency behavior, which are minor gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at two sentences, yet it conveys core purpose, environment, usage restriction, and alternative tool. Every sentence adds value with no wasted words, and the critical information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description adequately explains the tool's purpose and usage context, but it omits what the agent should expect in return (no output schema) and does not clarify response format or confirmation of deployment. Given the complexity (11 params, nested objects), a brief note on return values or status would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are already documented. The description adds minimal additional meaning beyond reinforcing that chain is sepolia-only and that parameters like features, template, etc. are optional overrides. This meets the baseline but does not significantly enhance understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the action (deploy), resource (ERC-20 token), and environment (Sepolia testnet). It explicitly distinguishes itself from the mainnet sibling ava_create_token_intent, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this for integration testing only' and directs to ava_create_token_intent for mainnet. It also explains the difference in signing and fees, giving clear when-to-use and when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_get_deployment_status: B

Poll the current status of a token deployment by its intentId. Use this after ava_deploy_token times out, or to check progress of an ava_create_token_intent flow. Returns: status ('deploying' | 'deployed' | 'failed'), contractAddress and explorer links when deployed, errorMessage on failure. Poll every 5-10 seconds. Most deployments complete within 60 seconds. Possible errors: insufficient fee sent, gas spike, RPC timeout — check errorMessage field.

Parameters (JSON Schema)
apiKey (required): Your Ava Genesis API key (ava_live_...) used when the deployment was created
intentId (required): The intentId returned by ava_deploy_token, ava_create_token_intent, or ava_confirm_deployment
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries burden. States it returns specific fields but does not mention read-only nature, potential failures, or side effects. Adequate but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with purpose, no unnecessary words. Efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description partially compensates by listing return fields. Missing error handling, prerequisites (e.g., deployment must exist), and auth details beyond apiKey parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (no parameter descriptions) and the description adds no meaning to 'apiKey' or 'intentId'. Leaves the agent guessing about required format or source.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action ('Check the status of a deployment') and the resource ('by intentId'), and mentions return fields (status, contract address, explorer links). Distinguishes from siblings like ava_deploy_token.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus siblings like ava_confirm_deployment or ava_deploy_token. Lacks context like 'use after deploying' or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_get_gas_prices: A

Get current gas prices and deployment cost estimates across all chains. Use to pick the cheapest chain before deploying. No auth required.

Parameters (JSON Schema)

No parameters
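The "pick the cheapest chain before deploying" usage could be sketched as below. Since the tool publishes no output schema, the result shape (a mapping from chain name to a dict with an `estimatedCostUsd` field) is purely an assumption for illustration; `mcp_call` is the same hypothetical client helper.

```python
def cheapest_chain(mcp_call):
    """Return the mainnet chain with the lowest estimated deploy cost."""
    prices = mcp_call("ava_get_gas_prices")  # no auth required
    # Exclude the free Sepolia testnet from the mainnet comparison.
    mainnets = {c: v for c, v in prices.items() if c != "sepolia"}
    return min(mainnets, key=lambda c: mainnets[c]["estimatedCostUsd"])
```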

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the transparency burden. It states 'No auth required,' which is a key behavioral trait. It does not mention rate limits or data freshness, but for a read-only cost check, it is largely sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short, direct sentences contain all necessary information with no extraneous words. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description clearly states the return values (gas prices and deployment cost estimates). The tool is simple (no parameters), and the description covers what an agent needs: purpose, when to use, and output nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so schema coverage is 100%. The description adds value by explaining the output (gas prices and deployment cost estimates), which is beyond what the empty schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('current gas prices and deployment cost estimates across all chains'), clearly distinguishing from siblings like ava_deploy_token or ava_confirm_deployment. It also states the usage goal ('pick the cheapest chain before deploying').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises using the tool before deployment to select the cheapest chain. However, it does not mention alternatives or when not to use it, though no direct alternatives exist among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_list_my_tokens: A

List all ERC-20 tokens deployed via your API key, newest first. Use to audit past deployments, find a contract address, or check deployment history. Supports filtering by status and chain. Returns up to 50 results by default (max 200). Does not return tokens deployed by other API keys even on the same account.

Parameters (JSON Schema)
chain (optional): Filter by chain. Omit to return tokens across all chains.
limit (optional): Number of results to return (default: 50, max: 200)
apiKey (required): Your Ava Genesis API key (ava_live_...)
status (optional): Filter by deployment status. Omit to return all statuses.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It only states the tool lists tokens, neglecting disclosures about side effects, rate limits, authentication details (beyond apiKey), pagination, or ordering. This is insufficient for a mutation-free read tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 16 words, front-loading the core purpose and optional filters. Every word is necessary, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, no output schema, and no annotations, the description minimally covers the purpose and filters but omits return format, pagination, error handling, and relationship to sibling tools. It is adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With only 25% schema description coverage (only 'limit' has a default), the description adds value by mentioning 'status' and 'chain' filters, which map to the enum parameters. However, it does not explain the 'apiKey' parameter or the meaning of 'limit' beyond its default, leaving gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and the resource 'tokens deployed via your API key', with optional filters. It distinguishes from sibling tools like ava_deploy_token and ava_get_deployment_status by focusing on listing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for listing tokens with filters, but provides no explicit guidance on when to use this tool versus alternatives (e.g., ava_get_deployment_status for specific token status) or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_list_templates: A

List available token templates (utility, governance, reward, treasury, community, meme). Each returns pre-configured feature flags.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds 'Each returns pre-configured feature flags' beyond just listing. No annotations provided, but description covers behavioral aspect adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no waste. Front-loaded with purpose and followed by clarifying detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a simple list tool: states what it does and what each item returns. Lacks only output format details, but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters to document; schema coverage is 100%. Description adds nothing beyond schema, which is fine.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'List available token templates' with a specific verb and resource, and lists categories (utility, governance, etc.). Differentiates from sibling tools like creation/deployment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance, though usage is implied by the function. Could be improved by noting it is useful before token creation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ava_simulate_token: A

Validate a token configuration and get a fee estimate without spending gas or deploying anything. Use this before ava_deploy_token or ava_create_token_intent to confirm the config is valid and see the exact ETH cost. Returns: estimated fee in ETH and USD, resolved feature flags, tier (Starter/Basic/Premium), and any validation errors. Does not create an intent or charge any fee.

Parameters (JSON Schema)
name (required): Token name, e.g. 'My Protocol Token'
chain (required): Target chain for fee estimate. Base and BNB Chain have lowest gas fees.
apiKey (required): Your Ava Genesis API key (ava_live_...)
supply (required): Initial token supply as a number string, e.g. '1000000'. Avoid scientific notation.
symbol (required): Token ticker symbol, e.g. 'MPT' (max 16 chars, uppercase recommended)
decimals (optional): Token decimal places (default: 18). Use 6 for stablecoin-like tokens, 18 for standard ERC-20.
features (optional): Optional feature overrides. Any premium feature (mintable, pausable, blacklist, transferTax, antiWhale) triggers the $50 tier. Burnable alone is $20 Basic tier.
template (optional): Optional preset that pre-fills feature flags. Use ava_list_templates to see each preset's config.
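The tier rules stated in the features parameter docs (any premium flag triggers the $50 Premium tier, burnable alone is the $20 Basic tier, otherwise Starter) can be mirrored client-side to predict what ava_simulate_token should report. This is an illustrative restatement of the documented rules, not the server's actual pricing code.

```python
# Premium feature flags per the parameter documentation above.
PREMIUM_FEATURES = {"mintable", "pausable", "blacklist",
                    "transferTax", "antiWhale"}


def expected_tier(features):
    """Predict the tier ava_simulate_token should return for a config."""
    enabled = {name for name, on in (features or {}).items() if on}
    if enabled & PREMIUM_FEATURES:
        return "Premium"   # $50 tier: any premium flag triggers it
    if "burnable" in enabled:
        return "Basic"     # $20 tier: burnable alone
    return "Starter"       # plain ERC-20
```

Comparing this prediction against the simulator's `tier` field is a cheap sanity check before committing to a paid mainnet deployment.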
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden of behavioral disclosure. It states the tool does not deploy, implying no state change, but lacks details on authentication, rate limits, or what exactly is validated. It provides moderate transparency but is insufficient given the absence of annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences: the first states the core function, the second provides usage guidance. It is front-loaded with the most important information and contains no extraneous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the parameter count of 8, required 5, no output schema, and nested objects, the description is too brief. It omits details about the fee estimate output, validation results, and parameter roles (e.g., features, template). For a tool of moderate complexity, this is incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 13% (only 'decimals' has a description), and the description adds no parameter-specific information beyond the schema. It refers to 'token config' broadly but does not explain the meaning or usage of key parameters like features, template, or chain, failing to compensate for the low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool validates a token config and gets a fee estimate without deploying. It distinguishes itself from the sibling ava_deploy_token by specifying 'without deploying', making its purpose unique and clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool: 'Use to sanity-check before calling ava_deploy_token.' It provides a clear context and suggests an alternative (ava_deploy_token) for deployment, though it does not explicitly state when not to use it beyond that.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

