
validate-aso

Validate ASO data against App Store and Google Play field limits, checking length restrictions, duplicate keywords, and invalid characters to ensure optimization compliance.

Instructions

Validates ASO data against App Store / Google Play field limits and rules.

IMPORTANT: Use 'search-app' tool first to resolve the exact slug.

WHAT IT VALIDATES

  1. Field Length Limits (docs/aso/ASO_FIELD_LIMITS.md):

    • App Store: name ≤30, subtitle ≤30, keywords ≤100, description ≤4000

    • Google Play: title ≤50, shortDescription ≤80, fullDescription ≤4000

  2. Keyword Duplicates (App Store only):

    • Checks for duplicate keywords in comma-separated list

  3. Invalid Characters:

    • Control characters, BOM, zero-width/invisible characters, variation selectors
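The three checks above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: the limit values are taken from the description, while the function names and the exact invisible-character set are assumptions.

```python
import unicodedata

# Length limits quoted in the description (App Store / Google Play).
APP_STORE_LIMITS = {"name": 30, "subtitle": 30, "keywords": 100, "description": 4000}
GOOGLE_PLAY_LIMITS = {"title": 50, "shortDescription": 80, "fullDescription": 4000}

# Zero-width / BOM code points commonly flagged as invisible characters.
INVISIBLE = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def check_lengths(fields, limits):
    """Return the field names whose values exceed the platform limit."""
    return [k for k, v in fields.items() if k in limits and len(v) > limits[k]]

def duplicate_keywords(keywords):
    """Find duplicates in an App Store comma-separated keyword list."""
    seen, dupes = set(), []
    for kw in (k.strip().lower() for k in keywords.split(",")):
        if kw in seen:
            dupes.append(kw)
        seen.add(kw)
    return dupes

def invalid_chars(text):
    """Collect control characters, invisible characters, and variation selectors."""
    return [c for c in text
            if c in INVISIBLE
            or unicodedata.category(c) == "Cc"      # control characters
            or 0xFE00 <= ord(c) <= 0xFE0F]          # variation selectors
```

Comparing keywords case-insensitively is itself an assumption; the tool may treat "Fitness" and "fitness" as distinct.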

WHEN TO USE

  • After running improve-public Stage 1/2 to verify optimization results

  • Before running public-to-aso to ensure data is valid

  • Anytime you want to check ASO data validity

OPTIONS

  • locale: Validate specific locale only (e.g., "ko-KR")

  • fix: Auto-fix issues where possible (removes invalid characters)
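As a concrete example, a call that validates only the Korean locale with auto-fix enabled might pass arguments shaped like the following (the slug value here is a hypothetical placeholder; resolve the real one with 'search-app' first):

```json
{
  "slug": "example-app",
  "locale": "ko-KR",
  "fix": true
}
```

Omitting "locale" validates all locales, per the input schema below.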

Input Schema

Name    Required  Description                                                  Default
slug    Yes       Product slug                                                 —
locale  No        Specific locale to validate                                  all locales
fix     No        Auto-fix issues where possible (e.g., remove invalid chars)  —
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what the tool does (validation against specific rules), mentions auto-fix capability via the 'fix' option, and references external documentation. However, it never states explicitly whether the tool is read-only or mutating; the 'fix' option implies it can modify data in place.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (IMPORTANT, WHAT IT VALIDATES, WHEN TO USE, OPTIONS), uses bullet points for readability, and every sentence adds value. It's appropriately sized for a tool with detailed validation rules and usage guidelines, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (validation against multiple platform rules with auto-fix capabilities) and no annotations or output schema, the description does an excellent job explaining what it validates and when to use it. The main gap is lack of information about return values or error handling, which would be helpful since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context beyond the schema: it explains that 'locale' validates a specific locale only (with an example 'ko-KR'), clarifies that 'fix' auto-fixes issues like removing invalid characters, and mentions that slug should be resolved via 'search-app' first. This provides valuable semantic understanding of the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Validates ASO data against App Store / Google Play field limits and rules.' It specifies the exact resource (ASO data) and action (validation) with detailed scope (field limits, duplicates, invalid characters). This distinguishes it from sibling tools like 'search-app' or 'improve-public' which have different functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines in the 'WHEN TO USE' section: after running improve-public Stage 1/2, before running public-to-aso, or anytime to check validity. It also references the 'search-app' tool as a prerequisite in the IMPORTANT note. This gives clear context for when to use this tool versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
