Studio MCP Hub Site
Server Details
A one-stop creative pipeline for AI agents: generate, upscale, enrich, sign, store, mint. 24 paid MCP tools powered by Stable Diffusion, Imagen 3, ESRGAN, and Gemini — plus 53K+ museum artworks from Alexandria Aeternum. Three payment rails, volume discounts, and a free trial to start.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3.8/5 across all 27 tools. Lowest-scoring tool: 3.1/5.
Most tools have distinct purposes, but there is some overlap between get_artwork and get_artwork_oracle, which both retrieve artwork metadata at different levels of detail and could be confused. enrich_metadata and infuse_metadata also have related but distinct functions; their descriptions help clarify the differences.
Tool names generally follow a consistent verb_noun pattern (e.g., check_balance, delete_asset, resize_image), with minor deviations like mockup_image (noun_verb) and get_artwork_oracle (longer compound name). Overall, the naming is readable and predictable, though not perfectly uniform.
With 27 tools, the count is borderline high for a single server, as it covers a broad range of functionalities from artwork retrieval to image processing and compliance. While each tool seems useful, the scope feels heavy and could overwhelm agents, suggesting it might be better split into more focused servers.
The tool set provides comprehensive coverage for digital asset management, artwork analysis, and image processing, including CRUD operations (save_asset, get_asset, list_assets, delete_asset), metadata enrichment, compliance, and various image utilities. No obvious gaps are present for the stated domain.
Available Tools
27 tools

batch_download (A) · Read-only · Idempotent
Bulk download metadata + images from Alexandria Aeternum (min 100 artworks). ($5.00 / 50 GCX)
| Name | Required | Description | Default |
|---|---|---|---|
| offset | No | Start offset for pagination | |
| quantity | No | Number of artworks (min 100) | |
| dataset_id | No | Dataset ID | alexandria-aeternum |
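A minimal sketch of how an agent might assemble the `tools/call` request for batch_download, validating the documented "min 100" constraint locally before spending credits. The JSON-RPC envelope follows the MCP convention; the helper name is illustrative, not part of this server's API.

```python
import json

def batch_download_request(offset: int = 0, quantity: int = 100,
                           dataset_id: str = "alexandria-aeternum") -> str:
    """Build a JSON-RPC 2.0 tools/call request for batch_download.

    The tool enforces a minimum of 100 artworks per call, so check
    locally before issuing a paid call ($5.00 / 50 GCX)."""
    if quantity < 100:
        raise ValueError("batch_download requires quantity >= 100")
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "batch_download",
            "arguments": {
                "offset": offset,
                "quantity": quantity,
                "dataset_id": dataset_id,
            },
        },
    })
```

Rejecting under-minimum quantities client-side is cheaper than discovering the constraint through a failed (or partially billed) server call.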
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral context absent from annotations: pricing ($5.00 / 50 GCX) and volume constraints (min 100). While annotations cover safety profile (readOnly, idempotent), the description discloses economic and operational boundaries essential for agent decision-making.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Every element earns its place: action (Bulk download), content (metadata + images), source (Alexandria Aeternum), constraint (min 100), and cost ($5.00 / 50 GCX). Front-loaded, zero redundancy, two efficient clauses.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong coverage for a paid bulk operation: discloses cost, source, and volume requirements. Missing only output format details (JSON vs ZIP, pagination structure), though no output schema exists to supplement this. Annotations adequately cover the safety/destructive profile.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured data carries the semantic load. The description reinforces the 'min 100' constraint matching the quantity parameter's minimum behavior but adds no additional parameter syntax or format details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Bulk download metadata + images' provides precise verb and resource, 'Alexandria Aeternum' identifies the dataset, and 'min 100 artworks' clearly distinguishes this high-volume tool from single-fetch siblings like get_artwork.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'min 100 artworks' constraint implies usage (use when needing 100+ items), but lacks explicit guidance on when-not-to-use or specific alternatives like 'use get_artwork for smaller batches.' The cost disclosure ($5.00 / 50 GCX) implicitly guides cost-sensitive decisions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_balance (A) · Read-only · Idempotent
Check your GCX credit balance, loyalty rewards, and volume tier. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| wallet | Yes | Your EVM wallet address (0x...) | |
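The `wallet` parameter expects an EVM address in `0x...` form. A quick local shape check, assuming the standard 0x-prefixed 40-hex-character format, can catch malformed input before the call; note this does not verify the EIP-55 mixed-case checksum, only the basic format.

```python
import re

# 0x followed by exactly 40 hex characters; case-insensitive.
# This is a format check only, not an EIP-55 checksum validation.
EVM_ADDRESS = re.compile(r"^0x[0-9a-fA-F]{40}$")

def is_valid_wallet(wallet: str) -> bool:
    return bool(EVM_ADDRESS.match(wallet))
```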
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral context beyond annotations: explicitly states 'FREE' (cost behavior) and details the three specific data points returned (credit, rewards, tier). No contradiction with readOnlyHint/idempotentHint annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with action and resources. Minor redundancy with repeated '(FREE)' parentheses, but otherwise efficient two-sentence structure with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple read-only query tool. Specifies what data is retrieved (compensating for missing output schema) and annotations cover safety/destructive behavior. Could mention authentication requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description does not mention the 'wallet' parameter, but schema coverage is 100% with clear description ('Your EVM wallet address'). Baseline score appropriate when schema carries full semantic load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Check' plus explicit resources (GCX credit balance, loyalty rewards, volume tier) clearly distinguishes this financial/account tool from the art-focused siblings (resize_image, watermark_embed, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance, prerequisites, or alternatives provided. While self-evident as the sole balance-checking tool among 20+ creative tools, the description lacks explicit selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compliance_manifest (A) · Read-only · Idempotent
Get AB 2013 (California) + EU AI Act Article 53 compliance manifests for dataset usage. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| dataset_id | No | Dataset ID | alexandria-aeternum |
| regulation | No | Filter: ab2013, eu_ai_act, or all | all |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnlyHint=true, destructiveHint=false, idempotentHint=true). Description adds cost transparency ('FREE') not present in annotations. However, fails to describe what the manifest contains, return format, or compliance validation behavior beyond the structured hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with key action and scope. However, 'FREE. (FREE)' is repetitive and wasteful—violates 'every sentence earns its place'. Otherwise efficiently structured in two short sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 2-parameter retrieval tool with 100% schema coverage and rich annotations. Lacks description of return values, but no output schema exists to guide that expectation; description covers the essential regulatory context (AB 2013, EU AI Act) that annotations omit.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with basic descriptions. Description adds semantic value by mapping regulation codes to full names ('AB 2013 (California)', 'EU AI Act Article 53') and contextualizing dataset_id with 'for dataset usage', clarifying the relationship between parameters and real-world regulations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' and identifies exact resources: 'AB 2013 (California) + EU AI Act Article 53 compliance manifests'. Clearly distinguishes from creative/asset siblings (batch_download, resize_image, etc.) by specifying legal/compliance domain and dataset usage context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance provided. Usage is implied by naming specific regulations (AB 2013, EU AI Act), suggesting use for California and EU AI compliance checking, but lacks explicit guidance like 'Use before distributing datasets' or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
convert_color_profile (A) · Read-only · Idempotent
Convert between sRGB and CMYK color profiles. Essential for print production. CMYK output as TIFF with embedded DPI. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| dpi | No | Output DPI (72-1200) | |
| image | Yes | Base64-encoded PNG/JPEG image | |
| target_profile | No | Target color profile | cmyk |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false, establishing safety profile. Description adds crucial behavioral context not in schema: output is TIFF format with embedded DPI. Also notes cost ('FREE'), though repetitive. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Appropriately short and front-loaded with core function. However, 'FREE. (FREE)' is repetitive and doesn't aid tool selection logic. The output format specification is appropriately placed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter conversion tool with complete schema coverage and safety annotations, the description adequately covers purpose, output format constraints, and use case. Minor gap regarding sRGB output format specification, but sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (baseline 3). Description adds value by clarifying that DPI is 'embedded' in the output file and specifying TIFF output format for CMYK conversions, semantic details not explicit in the parameter schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (Convert) and resource (color profiles) with specific standards named (sRGB/CMYK). Distinguishes from general image manipulation siblings by specifying color profile domain. Minor ambiguity: claims 'CMYK output as TIFF' but doesn't clarify output format when converting to sRGB, despite the parameter allowing both directions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context ('Essential for print production') positioning it relative to use case. However, lacks explicit when-to-use guidance versus siblings like 'print_ready' or 'mockup_image', and no exclusions or prerequisites stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_asset (B) · Destructive · Idempotent
Delete an asset from your wallet storage. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Asset key to delete | |
| wallet | Yes | Your EVM wallet address (0x...) | |
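Because delete_asset is destructive and idempotent, a cautious agent can verify the asset exists via get_asset before deleting, as the quality review below suggests. A sketch of that pattern, assuming a generic `call_tool(name, arguments)` client function (hypothetical; substitute your MCP client's invocation method):

```python
def safe_delete(call_tool, wallet: str, key: str) -> bool:
    """Confirm the asset exists before issuing the destructive delete.

    delete_asset is idempotent, so deleting a missing key is harmless,
    but checking first distinguishes a real deletion from a no-op."""
    asset = call_tool("get_asset", {"wallet": wallet, "key": key})
    if asset is None:
        return False  # nothing to delete
    call_tool("delete_asset", {"wallet": wallet, "key": key})
    return True
```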
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already disclose the destructive, non-read-only, idempotent nature of the tool. The description adds cost information ('FREE') but fails to elaborate on deletion consequences, recovery options, or what 'wallet storage' specifically entails.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief and front-loaded with the action, the duplicated '(FREE)' creates unnecessary noise and structural sloppiness that wastes the agent's context window without adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema and comprehensive annotations, the description is minimally adequate. However, for a destructive wallet operation, it should explicitly mention the permanent nature of the deletion or recovery implications.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description does not add parameter semantics beyond what the schema already provides (wallet format, key identification).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Delete), resource (asset), and scope (wallet storage), effectively distinguishing it from siblings like save_asset or get_asset. However, the redundant '(FREE)' noise slightly detracts from professional clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites (e.g., verifying ownership via get_asset first) or warnings about the destructive nature of the operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
enrich_metadata (A)
AI-powered artwork analysis. Two tiers: 'standard' (1 GCX) = SEO-optimized title, description, keywords, alt_text via Nova-Lite. 'premium' (2 GCX, default) = full 8-section Golden Codex museum-grade analysis via Nova/Gemini 2.5 Pro. Optionally customize metadata fields and add a Soul Whisper personal message. ($0.20 / 2 GCX)
| Name | Required | Description | Default |
|---|---|---|---|
| tier | No | Metadata tier: 'standard' (1 GCX — SEO title/description/keywords/alt_text) or 'premium' (2 GCX — full Golden Codex 8-section analysis) | premium |
| image | Yes | Base64-encoded PNG/JPEG image | |
| title | No | Artwork title (leave blank for AI to suggest one) | |
| context | No | Creator's brief — technical/artistic context for AI analysis (e.g. 'SD 3.5 Large, impressionist style') | |
| artist_name | No | Artist/creator name (embedded in metadata) | |
| content_type | No | Content type hint: 'artwork' or 'photo' — affects analysis style | artwork |
| soul_whisper | No | Optional personal message embedded in metadata — visible to anyone who reads the image's provenance (premium tier only) | |
| creation_year | No | 4-digit creation year | |
| copyright_holder | No | Copyright owner name | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-idempotent, non-destructive write operation (readOnlyHint: false). The description adds substantial valuable context beyond annotations: specific AI models used (Nova-Lite vs Nova/Gemini 2.5 Pro), exact output structures (8-section Golden Codex), and cost structure (GCX pricing). It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense with no wasted words, efficiently packing tier comparisons, pricing, and optional features into three sentences. It is front-loaded with the core value proposition (AI analysis). Minor deduction for density that could benefit from structural separation of tier details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description comprehensively covers input parameters and processing behavior, there is no output schema provided, and the description fails to specify what data structure the tool returns (JSON object? Metadata fields? Analysis text?). For a complex 9-parameter AI tool, this omission leaves agents uncertain about result handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds synthesis value by mapping tier selections to specific costs (1 GCX vs 2 GCX) and AI models, clarifying the 'premium' default, and contextualizing the 'soul_whisper' feature as a personal message option. This helps agents understand the semantic relationship between the 'tier' parameter and output quality.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'AI-powered artwork analysis' with two distinct tiers (standard SEO metadata vs premium museum-grade analysis). It specifies outputs for each tier (Nova-Lite for standard, Nova/Gemini 2.5 Pro for premium). However, it does not explicitly distinguish this from sibling tool 'infuse_metadata', which likely handles non-AI or manual metadata embedding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on selecting between 'standard' (1 GCX) and 'premium' (2 GCX) tiers based on desired output depth (SEO vs Golden Codex analysis). It notes the default is premium and includes pricing context ($0.20). However, it lacks explicit guidance on when to use this AI-analysis tool versus alternatives like 'infuse_metadata' for direct metadata injection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
extract_palette (A) · Read-only · Idempotent
Extract dominant color palette from an image. Returns hex/RGB/HSL colors with percentages, CSS names, and complementary colors. Great for design systems, mood boards, and color matching. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Base64-encoded PNG/JPEG image | |
| format | No | Color format in output | hex |
| num_colors | No | Number of colors to extract (3-12) | |
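To make the hex-plus-percentage output shape concrete, here is a rough local approximation of palette extraction over a list of RGB triples. It counts exact pixel values, whereas the real tool presumably clusters similar colors; it is a sketch of the output format, not the server's algorithm.

```python
from collections import Counter

def dominant_palette(pixels, num_colors=5):
    """Count exact RGB triples and report the top num_colors as
    hex strings with percentages, mirroring the tool's num_colors
    range of 3-12. (No clustering of near-identical colors.)"""
    if not 3 <= num_colors <= 12:
        raise ValueError("num_colors must be between 3 and 12")
    counts = Counter(pixels)
    total = len(pixels)
    return [
        {"hex": "#{:02x}{:02x}{:02x}".format(r, g, b),
         "percent": round(100 * n / total, 1)}
        for (r, g, b), n in counts.most_common(num_colors)
    ]
```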
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations confirm read-only, idempotent, safe operations. The description adds valuable output structure details ('hex/RGB/HSL colors with percentages, CSS names, and complementary colors') that compensate for the missing output schema, plus pricing information ('FREE'). Does not mention potential rate limits or image size constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently front-loaded with purpose, followed by outputs, use cases, and pricing. The double '(FREE)' notation is slightly redundant, but overall structure respects the agent's attention with no wasted sentences beyond this minor repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately details return values (color formats, percentages, complementary colors) to inform the agent's expectations. Combined with complete input schema documentation and safety annotations, the description provides sufficient context for a 3-parameter analysis tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all three parameters (image encoding, format enum, num_colors range). The description adds no parameter-specific guidance beyond what the schema provides, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a precise action ('Extract dominant color palette') and source ('from an image'), clearly distinguishing it from sibling transformation tools like resize_image or remove_background. The specific mention of 'dominant' and percentages positions it uniquely among the asset analysis tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete use cases ('design systems, mood boards, and color matching') that guide the agent toward appropriate invocation contexts. Lacks explicit 'when not to use' guidance or named alternatives (e.g., versus convert_color_profile), but the positive guidance is sufficiently specific.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_artwork (A) · Read-only · Idempotent
Get Human_Standard metadata (500-1200 tokens) + signed image URL for a museum artwork. ($0.10 / 1 GCX)
| Name | Required | Description | Default |
|---|---|---|---|
| artifact_id | Yes | Artifact ID from search results (e.g. 'met_437419') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the annotations (readOnly, idempotent), the description adds critical behavioral context: the cost per request, the expected output size (500-1200 tokens), and the fact that the image URL is 'signed' (implying temporary/expiring access). These are important operational details not captured in the structured metadata.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely efficient: one sentence covering the action, output format, output size, and cost. Every element—including the parenthetical pricing—earns its place by conveying essential operational constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description compensates by detailing the return value structure (metadata + signed URL) and cost. It adequately covers the tool's behavior for a single-parameter, paid API endpoint, though it could briefly clarify the 'Human_Standard' format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single artifact_id parameter, the schema already fully documents the input requirement. The description does not add additional parameter semantics, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'Human_Standard metadata' and a 'signed image URL' for a 'museum artwork,' specifying both the verb and resource. However, it does not explicitly differentiate from siblings like get_artwork_oracle or get_asset, which also handle artwork/assets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides cost information ('$0.10 / 1 GCX'), which serves as a usage constraint, but offers no explicit guidance on when to choose this tool over alternatives like batch_download (for multiple items) or search_artworks (for finding IDs). The guidance is limited to financial cost awareness.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_artwork_oracle (A) · Read-only · Idempotent
Get Hybrid_Premium 111-field NEST analysis (2K-6K tokens) + image. Deep AI visual analysis with color palette, composition, symbolism, emotional mapping. ($0.20 / 2 GCX)
| Name | Required | Description | Default |
|---|---|---|---|
| artifact_id | Yes | Artifact ID from search results (e.g. 'met_437419') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the annotations indicating read-only and idempotent behavior, the description adds crucial context including the financial cost, approximate token payload size, and specific analytical dimensions (emotional mapping, symbolism). This helps the agent understand resource expenditure and response characteristics not covered by the boolean annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently delivers three distinct value propositions across two sentences: the analysis type and scale (111-field, 2K-6K tokens), the specific content domains (color, composition, symbolism), and the cost structure. Every element serves a distinct purpose without redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the absence of a formal output schema, the description compensates effectively by specifying that the response includes both a substantial text analysis (2K-6K tokens) and an image, while enumerating the analytical categories covered. The explicit pricing information fills a critical gap for a paid tool, though error handling or latency expectations are not mentioned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single artifact_id parameter, the description appropriately relies on the schema's explicit definition ('Artifact ID from search results'). The main description does not supplement parameter semantics, which is acceptable given the schema's completeness but earns only the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool performs a 'Deep AI visual analysis' generating a 'Hybrid_Premium 111-field NEST analysis' covering specific dimensions like color palette, composition, symbolism, and emotional mapping. It clearly distinguishes itself from siblings like get_artwork (basic retrieval) and extract_palette (single feature) by emphasizing the comprehensive 111-field output and premium tier.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical cost information ('$0.20 / 2 GCX') and output scale ('2K-6K tokens'), which guides the agent to use this premium tool only when deep analysis is required versus free alternatives. However, it lacks explicit prerequisites (e.g., confirming artifact_id must originate from search_artworks) or direct comparisons to sibling analysis tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_asset — B · Read-only · Idempotent
Retrieve a stored asset from your wallet storage by key. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Asset key to retrieve | |
| wallet | Yes | Your EVM wallet address (0x...) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable cost information ('FREE') and storage context ('wallet storage') not present in annotations. Consistent with readOnlyHint annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the main action, but the duplicated '(FREE)' creates noise. The repetition suggests template error rather than intentional emphasis, wasting tokens without adding semantic value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple two-parameter retrieval tool. With comprehensive annotations and full schema coverage, the description provides sufficient context for invocation. No output schema exists, but description doesn't need to explain return values per guidelines.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description reinforces parameters by mentioning 'by key' and 'wallet storage' context, but adds no syntax details, formats, or validation rules beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Retrieve') and resource ('stored asset'), with scope ('from your wallet storage by key'). Effectively distinguishes from siblings like save_asset, delete_asset, and list_assets. Deduction for redundant 'FREE' text which slightly obscures the core purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use versus alternatives (e.g., list_assets for browsing, get_artwork for external works). No mention of prerequisites such as asset existence or wallet registration status.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tool_schema — A · Read-only · Idempotent
Get the full JSON Schema and usage examples for a specific tool. Use after search_tools to load only what you need. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| tool_name | Yes | Tool name from search_tools results (e.g. 'generate_image') | |
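The description's "use after search_tools" sequencing can be sketched as two successive call payloads; the helper and the `query` argument to search_tools are illustrative assumptions, while the tool names and `tool_name` parameter come from this page.

```python
# Hypothetical two-step discovery flow: search_tools narrows the
# 27-tool catalog, then get_tool_schema loads the full schema for
# only the tool you intend to call.
def make_call(name, arguments):
    """Build an MCP tools/call params object (JSON-RPC envelope omitted)."""
    return {"name": name, "arguments": arguments}

search = make_call("search_tools", {"query": "generate"})
schema = make_call("get_tool_schema", {"tool_name": "generate_image"})
```

Loading schemas lazily this way keeps the agent's context small, which is the efficiency rationale the description hints at with "load only what you need."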
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare read-only, idempotent, safe properties. Description adds valuable behavioral context beyond annotations: specifies return content ('full JSON Schema and usage examples'), cost model ('FREE'), and workflow positioning. Does not mention error handling or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely compact three-segment structure: capability (what), workflow guidance (when), and cost hint (context). Every clause delivers distinct value with zero redundancy. Perfectly front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter retrieval tool with rich annotations and no output schema, the description is complete. It covers purpose, sequencing, and operational constraints without needing to describe complex return structures or side effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with 'tool_name' fully documented including an example value ('generate_image'). Description provides no additional parameter semantics, but with complete schema coverage, no compensation is needed. Baseline score appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resources 'JSON Schema and usage examples'. Explicitly positions the tool relative to sibling 'search_tools' via 'Use after search_tools', distinguishing its role in the workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit sequencing guidance ('Use after search_tools') and explains the efficiency rationale ('to load only what you need'). The '(FREE)' tag adds cost context that aids in tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
infuse_metadata — A · Idempotent
Embed metadata into image via ExifTool. Two modes: 'standard' (XMP/IPTC only — title, description, keywords, copyright) or 'full_gcx' (default — full Golden Codex XMP-gc namespace + IPTC + C2PA + soulmark + hash registration). ($0.10 / 1 GCX)
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Base64-encoded PNG/JPEG image | |
| metadata | Yes | Metadata JSON to embed. For standard: {title, description, keywords, alt_text, copyright_holder}. For full_gcx: Golden Codex JSON from enrich_metadata. | |
| metadata_mode | No | Infusion mode: 'standard' (XMP/IPTC fields only) or 'full_gcx' (full Golden Codex + soulmark + hash registration) | full_gcx |
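The two metadata shapes the description distinguishes can be sketched as argument dicts. The field names for 'standard' mode come from the schema's `metadata` description; the 'full_gcx' body is a placeholder, since the real Golden Codex layout (produced by enrich_metadata) is not documented on this page.

```python
# 'standard' mode: plain XMP/IPTC fields, named per the schema.
standard_args = {
    "image": "<base64 PNG/JPEG>",
    "metadata": {
        "title": "Sunset Study",
        "description": "Oil on canvas, impressionist style",
        "keywords": ["sunset", "impressionism"],
        "alt_text": "Orange sunset over a calm sea",
        "copyright_holder": "Example Studio",
    },
    "metadata_mode": "standard",
}

# 'full_gcx' mode: pass the JSON returned by enrich_metadata.
# The body below is a stand-in, not the actual Golden Codex schema.
full_gcx_args = {
    "image": "<base64 PNG/JPEG>",
    "metadata": {"golden_codex": "...output of enrich_metadata..."},
    # metadata_mode omitted: 'full_gcx' is the default
}
```

Because 'full_gcx' is the default and triggers soulmark and hash registration, an agent that only wants basic XMP/IPTC fields must pass `metadata_mode='standard'` explicitly.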
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (idempotent, non-destructive), the description adds critical context: specific metadata standards (XMP-gc, C2PA, IPTC), the concept of 'soulmark' and 'hash registration', and crucial cost information ('$0.10 / 1 GCX'). It does not contradict annotations (write operation correctly matches readOnlyHint=false).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely dense and efficient single sentence. Front-loaded with the core action ('Embed metadata...'), followed by mode specifications with parenthetical details, default indication, and pricing. Every clause delivers essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (involving ExifTool, C2PA, blockchain-adjacent concepts like soulmark, and costing), the description covers the essential behavioral and financial constraints. The 100% schema coverage handles parameter details. Minor gap: no mention of return value format (though no output schema exists to complement).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is met. The description adds conceptual mapping of the 'metadata_mode' enum values to their actual function, and the schema description clarifies the JSON structure expected for each mode. The cost context also helps agents understand the financial implication of the 'full_gcx' default.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Embed metadata into image'), the technology used ('via ExifTool'), and distinguishes itself from sibling 'enrich_metadata' by focusing on embedding rather than generating metadata. It explicitly defines the two operational modes and their scope differences.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly delineates the two modes ('standard' vs 'full_gcx') and their respective capabilities. The schema description hints at workflow ('Golden Codex JSON from enrich_metadata'), but the main description lacks explicit guidance on when to choose each mode or prerequisites (e.g., 'use after enrich_metadata').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_assets — B · Read-only · Idempotent
List all assets in your wallet storage with sizes and metadata. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| wallet | Yes | Your EVM wallet address (0x...) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, establishing this is a safe read operation. The description adds cost information ('FREE') not present in annotations, which is valuable. However, it lacks disclosure of pagination behavior, rate limits, or what metadata fields are included—behavioral traits an agent would need to handle the response properly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The first sentence is efficient and front-loaded with key information. However, the trailing 'FREE. (FREE)' constitutes wasteful repetition that does not earn its place in the description, suggesting poor editorial structure despite the overall brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter listing tool with good schema coverage and safety annotations, the description adequately covers the basic operation. However, it lacks completeness regarding edge cases (empty wallets), pagination for large result sets, or the structure of returned metadata—gaps that would help an agent interpret results correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'wallet' parameter, the schema already fully documents the input requirements. The description adds no additional parameter semantics (e.g., format validation, example addresses) beyond what the schema provides, warranting the baseline score for complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'List[s] all assets in your wallet storage with sizes and metadata', providing specific verb, resource, and scope. The use of 'all' implicitly distinguishes it from sibling 'get_asset' (single retrieval) and 'search_artworks' (filtered search). However, it does not explicitly name alternatives or contrasts, and the redundant 'FREE' text adds noise without clarifying purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_asset' (for single asset retrieval) or 'search_artworks' (for filtered queries). It does not mention prerequisites, performance considerations for large wallets, or when listing all assets is preferable to targeted retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mockup_image — A · Read-only · Idempotent
Place your design onto product mockups (t-shirt, poster, canvas, phone case, mug, tote bag). Instant product visualization for e-commerce and print-on-demand. FREE. ($0.10 / 1 GCX)
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Base64-encoded design image | |
| product | No | Product type | tshirt |
| background_color | No | Background hex color | #f5f5f5 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true and idempotentHint=true, the description adds pricing context not found in structured fields. However, the labels 'FREE.' and '($0.10 / 1 GCX)' contradict each other, leaving the actual cost ambiguous for an agent budgeting calls.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently cover the action, use case, and pricing. Every sentence earns its place with zero redundancy or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the straightforward 3-parameter schema with 100% coverage and comprehensive annotations, the description is complete for tool selection, though it could mention the output format (e.g., image URL or base64) since no output schema is provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description reinforces the product enum values by listing them in prose and frames the image parameter as a 'design,' but does not add significant semantic depth beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Place') and resource ('design onto product mockups'), and explicitly lists supported product types (t-shirt, poster, etc.) that distinguish it from generic image manipulation siblings like resize_image or remove_background.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides context on when to use the tool ('Instant product visualization for e-commerce and print-on-demand'), but lacks explicit guidance on when not to use it or which sibling tools to use instead (e.g., distinguishing from print_ready for production files).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
print_ready — A · Read-only · Idempotent
Prepare images for professional printing with DPI, bleed margins, crop marks. Supports A4, A3, Letter, poster (24x36), custom sizes. Output as TIFF or PDF. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| dpi | No | Output DPI | |
| image | Yes | Base64-encoded PNG/JPEG image | |
| bleed_mm | No | Bleed margin in mm (0-10) | |
| crop_marks | No | Draw crop marks in bleed area | |
| product_size | No | Standard paper size | a4 |
| output_format | No | Output format | tiff |
| custom_width_mm | No | Custom width in mm (required if product_size=custom) | |
| custom_height_mm | No | Custom height in mm (required if product_size=custom) | |
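The `custom_width_mm`/`custom_height_mm` parameters are only meaningful (and required) when `product_size='custom'`. A client-side check like the following, written against the parameter table above, can catch the interaction before spending a round trip; it is a sketch of the documented constraints, not the server's actual validation.

```python
def validate_print_args(args: dict) -> None:
    """Pre-flight check for print_ready arguments per the parameter table."""
    if args.get("product_size") == "custom":
        if "custom_width_mm" not in args or "custom_height_mm" not in args:
            raise ValueError(
                "product_size='custom' requires custom_width_mm and custom_height_mm"
            )
    bleed = args.get("bleed_mm")
    if bleed is not None and not 0 <= bleed <= 10:
        raise ValueError("bleed_mm must be in the range 0-10")
```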
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish the safety profile (readOnly, non-destructive, idempotent). The description adds valuable domain-specific behavioral context—specifying bleed margins, crop marks, and standard paper sizes—that helps the agent understand this is for physical print production. It does not contradict the readOnly annotation; 'prepare' is appropriately interpreted as generating derivative output.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The structure is logical and front-loaded with the core action. However, the redundant 'FREE. (FREE)' text wastes valuable description space that could have been used for behavioral details or return value documentation, slightly detracting from an otherwise tight presentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich input schema (8 well-documented parameters) and absence of an output schema, the description adequately covers input semantics but fails to describe the return value (presumably base64-encoded TIFF/PDF data). The 'FREE' mention is irrelevant to agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description lists the key parameters (DPI, bleed, crop marks, sizes, formats) but essentially mirrors the schema content without adding syntactic details, validation rules, or domain explanations beyond what's already in the property descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Prepare') and resource ('images'), clearly targeting 'professional printing' use cases. It explicitly lists key print-specific features (DPI, bleed margins, crop marks) that distinguish it from generic image manipulation siblings like resize_image or upscale_image.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While 'professional printing' provides implied context, the description lacks explicit guidance on when to choose this over siblings like resize_image (simple resizing) or convert_color_profile (color management only). No prerequisites or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_hash — A · Idempotent
Register 256-bit perceptual hash with LSH band indexing for strip-proof provenance. ($0.10 / 1 GCX)
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Base64-encoded PNG/JPEG image | |
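To illustrate the 'LSH band indexing' the description mentions: a common scheme splits the 256-bit perceptual hash into fixed-width bands and uses each band as a bucket key, so near-duplicate images (hashes differing in a few bits) still collide on at least one unchanged band. The band count and width below are assumptions; the server's actual scheme is not documented here.

```python
def lsh_bands(hash_bits: str, n_bands: int = 16):
    """Split a 256-char bit string into n_bands bucket keys."""
    assert len(hash_bits) == 256 and len(hash_bits) % n_bands == 0
    width = len(hash_bits) // n_bands
    return [hash_bits[i * width:(i + 1) * width] for i in range(n_bands)]

a = "01" * 128                                # a 256-bit hash
b = a[:-1] + ("1" if a[-1] == "0" else "0")   # same hash with one bit flipped
shared = set(lsh_bands(a)) & set(lsh_bands(b))  # bands that still collide
```

The overlap in `shared` is what makes the registration "strip-proof": removing embedded metadata does not change the image's perceptual hash, and small edits still land in at least one shared bucket.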
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical cost information ('$0.10 / 1 GCX') absent from annotations. Clarifies technical implementation ('LSH band indexing') beyond the idempotentHint=true annotation. Correctly implies write operation (aligns with readOnlyHint: false). Could improve by stating what constitutes successful registration or return value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence with zero waste. Front-loaded with action ('Register'), followed by technical specs, purpose ('for strip-proof provenance'), and cost. Every clause earns its place; no filler words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a single-parameter tool with good behavioral annotations. Covers function, mechanism, and cost. Minor gap: lacks indication of return value (e.g., hash ID, confirmation status), though absence of output schema reduces burden slightly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage ('Base64-encoded PNG/JPEG image'), establishing baseline 3. Description adds semantic value by implying the image is processed into a perceptual hash (not merely stored), distinguishing from file storage tools. Does not detail encoding constraints or size limits beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Register' (verb) + '256-bit perceptual hash' (resource) + 'LSH band indexing' (method) + 'strip-proof provenance' (domain). Distinguishes from sibling verify_provenance (which likely checks hashes) and save_asset (which stores files). Pricing annotation adds operational context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through technical jargon ('perceptual hash', 'provenance') suggesting digital asset fingerprinting use cases, but lacks explicit when-to-use guidance versus alternatives like verify_provenance or save_asset. No prerequisites stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_wallet — A · Idempotent
Register your wallet to get 10 FREE GCX credits ($1 value). New wallets only — enough to try upscale + enrich. Purchase more via GCX packs. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| wallet | Yes | Your EVM wallet address (0x...) | |
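The `wallet` parameter used across these tools expects an EVM address ('0x...'). A quick client-side shape check (0x followed by 40 hex characters) can catch typos before calling register_wallet; this regex reflects the common EVM address convention, not anything the server documents.

```python
import re

# Conventional EVM address shape: "0x" + 40 hex digits.
EVM_ADDRESS = re.compile(r"^0x[0-9a-fA-F]{40}$")

def looks_like_evm_address(wallet: str) -> bool:
    return bool(EVM_ADDRESS.match(wallet))
```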
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate idempotent, non-destructive write operations. The description adds crucial behavioral context not in annotations: the specific reward amount (10 credits), monetary value ($1), and the 'new wallets only' restriction which implies idempotency behavior (subsequent calls won't grant additional credits).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the value proposition, but suffers from redundant repetition ('FREE' appears three times including '(FREE)'). The marketing emphasis slightly detracts from technical clarity, though the structure remains logically organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter registration tool without output schema, the description adequately covers the value proposition ($1 worth of credits), usage constraints (new wallets), ecosystem integration (upscale/enrich), and next steps (purchasing packs). No significant gaps remain given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the parameter 'wallet' is fully documented in the schema as 'Your EVM wallet address (0x...)'. The description implies the parameter with 'Register your wallet' but doesn't add syntax, format details, or examples beyond what the schema already provides, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (register wallet) and outcome (receive 10 GCX credits worth $1). It effectively distinguishes from sibling tools like check_balance by emphasizing the credit acquisition aspect and explicitly mentions related tools (upscale, enrich) that consume these credits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear constraint 'New wallets only' indicating when not to use (if already registered). Suggests alternative for obtaining more credits via 'GCX packs' and contextualizes usage by stating credits are 'enough to try upscale + enrich', linking to sibling tool requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remove_background — A · Read-only · Idempotent
Remove image background using AI (U2-Net). Returns RGBA PNG/WebP with transparent background. Perfect for product photos, portraits, and design assets. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Base64-encoded PNG/JPEG image | |
| output_format | No | Output format | png |
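Every image-accepting tool on this server, this one included, takes a 'Base64-encoded PNG/JPEG image'. A minimal encoding helper is sketched below, assuming standard (non-URL-safe) base64; the server's exact expectations beyond the schema text are not documented.

```python
import base64

def encode_image(raw_bytes: bytes) -> str:
    """Encode raw image bytes for an 'image' argument."""
    return base64.b64encode(raw_bytes).decode("ascii")

# PNG files begin with this 8-byte signature; used here only to
# make the example self-contained.
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
image_arg = encode_image(fake_png)
```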
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnlyHint, destructiveHint), but description adds valuable behavioral context: output format specifics ('RGBA PNG/WebP with transparent background'), AI model disclosure ('U2-Net'), and cost information ('FREE') not present in structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded and efficiently structured (method → output → use cases → cost). Minor redundancy with repeated 'FREE. (FREE)' prevents a perfect score, but overall every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description adequately compensates by specifying return format (RGBA PNG/WebP). Combined with comprehensive annotations and clear parameter documentation, the tool is sufficiently specified for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100%, the description adds semantic meaning to the 'output_format' parameter by explaining it produces RGBA transparency—a key detail missing from the schema's generic 'Output format' description. Also reinforces input requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb ('Remove') + resource ('image background') + implementation detail ('using AI (U2-Net)'), clearly distinguishing it from sibling tools like upscale_image, vectorize_image, or watermark_embed which perform different transformations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context ('Perfect for product photos, portraits, and design assets') indicating ideal use cases, though it does not explicitly name sibling tools to avoid or provide negative constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resize_image (A · Read-only · Idempotent)
Resize an image to target dimensions. Supports fit modes: 'cover' (crop to fill), 'contain' (fit within, letterbox), 'stretch' (exact size). Useful for preparing images for specific platforms, thumbnails, or social media. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Resize mode: 'contain' (fit within bounds, preserve aspect ratio), 'cover' (crop to fill), 'stretch' (exact size, may distort) | contain |
| image | Yes | Base64-encoded PNG/JPEG image | |
| width | Yes | Target width in pixels (1-8192) | |
| format | No | Output format | png |
| height | Yes | Target height in pixels (1-8192) | |
| quality | No | JPEG/WebP quality (1-100) | |
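The three fit modes can be made concrete with a small dimension calculation — a sketch of the documented semantics ('contain' fits within bounds, 'cover' crops to fill, 'stretch' distorts to exact size), not the server's actual implementation:

```python
def target_dimensions(src_w: int, src_h: int, width: int, height: int,
                      mode: str = "contain") -> tuple:
    """Return the scaled (out_w, out_h) of the source image before any
    letterboxing ('contain') or cropping ('cover') is applied."""
    if mode == "stretch":
        # Exact target size; aspect ratio may be distorted.
        return width, height
    scale_w, scale_h = width / src_w, height / src_h
    # 'contain' uses the smaller scale factor so the whole image fits;
    # 'cover' uses the larger one so the target area is fully filled.
    scale = min(scale_w, scale_h) if mode == "contain" else max(scale_w, scale_h)
    return round(src_w * scale), round(src_h * scale)
```

For a 1000x500 source resized into a 200x200 box, 'contain' yields 200x100 (letterboxed), 'cover' yields 400x200 (then cropped), and 'stretch' yields exactly 200x200.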
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only/idempotent/safe operation. Description adds valuable behavioral specifics: semantic explanations of fit modes ('crop to fill', 'letterbox', 'stretch'), and discloses cost ('FREE') not present in annotations. No contradictions with annotation safety hints.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with action-first sentence followed by mode details and use cases. Front-loaded with essential information. Deducted one point for redundant 'FREE. (FREE)' text at end which adds no semantic value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% input schema coverage and good annotations, input side is well-covered. However, no output schema exists and description fails to specify return format (base64 string? binary data? URL?), leaving agents uncertain about result handling.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description enriches mode parameters with synonyms ('letterbox') and practical implications, but does not add critical details beyond schema (e.g., no format-specific guidance for quality param, no base64 size warnings).
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Resize') and target resource ('image') with clear dimension targeting. Explains fit modes (cover/contain/stretch) which helps distinguish it from generic transformation siblings like upscale_image or vectorize_image. 'FREE' text is noise but doesn't obscure purpose.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides positive usage contexts ('preparing images for specific platforms, thumbnails, or social media') but lacks explicit when-not-to-use guidance or differentiation from similar siblings like upscale_image (which also changes dimensions) or mockup_image.
save_asset (A · Idempotent)
Save an image or data to your personal wallet storage. 100MB free per wallet, 500 assets max. ($0.10 / 1 GCX)
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Unique name for this asset (e.g., 'my-landscape', 'pipeline-001') | |
| data | Yes | Base64-encoded data (image, JSON, etc.) — max 10MB | |
| wallet | Yes | Your EVM wallet address (0x...) | |
| metadata | No | Optional metadata JSON to store alongside | |
| content_type | No | MIME type | image/png |
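Client-side validation of save_asset arguments can be sketched from the constraints in the table alone — the 10MB data cap and the EVM address shape (0x plus 40 hex characters); the server's actual validation rules are not documented here:

```python
import base64
import re

MAX_DATA_BYTES = 10 * 1024 * 1024  # per the 'data' parameter: max 10MB

def build_save_asset_args(key: str, raw: bytes, wallet: str,
                          content_type: str = "image/png",
                          metadata: dict = None) -> dict:
    """Assemble save_asset arguments, rejecting inputs that would
    violate the documented constraints before any call is made."""
    if not re.fullmatch(r"0x[0-9a-fA-F]{40}", wallet):
        raise ValueError("wallet must be an EVM address (0x + 40 hex chars)")
    if len(raw) > MAX_DATA_BYTES:
        raise ValueError("data exceeds the documented 10MB limit")
    args = {
        "key": key,
        "data": base64.b64encode(raw).decode("ascii"),
        "wallet": wallet,
        "content_type": content_type,
    }
    if metadata is not None:
        args["metadata"] = metadata
    return args
```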
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral context beyond annotations: storage limits (100MB/500 assets) and pricing model ($0.10 / 1 GCX) that annotations don't cover. Aligns with idempotentHint=true by implying replaceable storage via 'save' semantics. Does not contradict annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first declares function, second states constraints/costs. Front-loaded with action verb. Pricing and limits are essential for agent decision-making, earning their place. No redundant or filler text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and annotations indicating idempotent/non-destructive behavior, description is nearly complete. Missing only: explicit mention of return value behavior (does it return the key, a hash, or success boolean?) and overwrite confirmation (though implied by idempotentHint).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing comprehensive parameter documentation. Description adds context that 'data' accepts images or other data types, but doesn't elaborate on parameter interactions (e.g., how content_type affects validation or how metadata is indexed). Baseline 3 appropriate given schema completeness.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Save' plus clear resource 'image or data to your personal wallet storage' distinguishes this from sibling processing tools (remove_background, resize_image) and retrieval tools (get_asset, list_assets). The scope is precisely bounded by the wallet context.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit guidance through 'wallet storage' context and cost constraints ($0.10 / 1 GCX), suggesting use for persistent storage vs temporary processing. However, lacks explicit when-to-use guidance vs siblings like delete_asset or list_assets, and doesn't mention prerequisites (e.g., wallet registration).
search_artworks (A · Read-only · Idempotent)
Search 53K+ museum artworks from Alexandria Aeternum (MET, Chicago, NGA, Rijksmuseum, Smithsonian, Cleveland, Paris). FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (1-100) | |
| query | Yes | Search query (e.g. 'impressionist landscape', 'Monet', 'Dutch Golden Age') | |
| museum | No | Filter by museum (met, chicago, nga, rijks, smithsonian, cleveland, paris) | |
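Argument assembly for this tool reduces to checking the two documented enumerations — the 1-100 limit range and the museum filter values. A client-side sketch (the server may accept or reject other inputs):

```python
MUSEUMS = {"met", "chicago", "nga", "rijks", "smithsonian", "cleveland", "paris"}

def build_search_artworks_args(query: str, museum: str = None,
                               limit: int = None) -> dict:
    """Assemble search_artworks arguments, enforcing the constraints
    listed in the parameter table."""
    if limit is not None and not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    if museum is not None and museum not in MUSEUMS:
        raise ValueError("museum must be one of: " + ", ".join(sorted(MUSEUMS)))
    args = {"query": query}
    if museum is not None:
        args["museum"] = museum
    if limit is not None:
        args["limit"] = limit
    return args
```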
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare read-only, idempotent, non-destructive safety. Description valuably adds cost information ('FREE'), dataset scale ('53K+'), and data provenance (specific museum list) that annotations don't cover. No contradictions with structured hints.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with action and scope. Length is appropriate for complexity. Minor deduction for redundant '(FREE)' repetition which adds no value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a 3-parameter search tool with good annotations. Describes data source and scale adequately. Lacks output description, but no output schema exists and rules state description needn't explain return values in that case.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all three parameters (query, limit, museum). The description adds no additional parameter semantics, but per rubric baseline is 3 when schema coverage is high.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Search') and resource ('museum artworks') with specific scope (53K+ items from named museums including MET, Chicago, etc.). Implicitly distinguishes from sibling 'get_artwork' by emphasizing the broad, multi-museum aggregation aspect, though explicit differentiation would strengthen it further.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like 'get_artwork' (likely single-item retrieval) or 'batch_download'. No mention of prerequisites, query syntax tips, or filtering strategies beyond the schema.
search_tools (A · Read-only · Idempotent)
Discover available tools by category or price without loading all schemas. Start here to save tokens. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Search query to filter tools by name or description | |
| category | No | Filter by category | all |
| max_price_usd | No | Max price per call in USD (0 = free only) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds cost information '(FREE)' and token efficiency context beyond annotations. However, given no output schema exists, the description should disclose return format (e.g., list of tool summaries) but omits this behavioral detail.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: first establishes purpose and differentiation, second gives workflow guidance, third states cost. Front-loaded with core function. Every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool with 100% schema coverage and complete annotations, description adequately covers purpose, cost, and workflow positioning. However, lacks return value specification which is needed given no output schema exists.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description references 'category or price' reinforcing those parameters, but adds no syntax details, examples, or semantics for the 'query' parameter beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Discover' with clear resource 'available tools' and scope 'by category or price'. The phrase 'without loading all schemas' effectively differentiates from sibling tool get_tool_schema, while 'tools' clearly distinguishes from search_artworks.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Start here to save tokens' indicating workflow position as entry point. Implicitly suggests alternative ('without loading all schemas' hints to use get_tool_schema for full schemas), though it doesn't explicitly name the alternative or state when not to use it.
upscale_image (A · Idempotent)
Super-resolution using Real-ESRGAN on NVIDIA L4 GPU. 5 models for different content types. Default: 2x general upscale. ($0.20 / 2 GCX)
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Base64-encoded PNG/JPEG image | |
| model | No | ESRGAN model to use. Options: 'realesrgan_x2plus' (2x, general — default), 'realesrgan_x4plus' (4x, general/photo), 'realesrgan_x4plus_anime' (4x, anime/illustrations), 'realesr_general_x4v3' (4x, fast general), 'realesr_animevideov3' (4x, anime video frames). | realesrgan_x2plus |
| scale | No | Shorthand: 2 selects x2plus, 4 selects x4plus. Ignored if model is specified directly. | |
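The documented precedence between `model` and `scale` — an explicit model wins, the shorthand maps 2 and 4 to the two general models, and the 2x general model is the default — can be expressed as a small resolver (a sketch of client-side logic, not the server's implementation):

```python
SCALE_TO_MODEL = {
    2: "realesrgan_x2plus",
    4: "realesrgan_x4plus",
}

def resolve_model(model: str = None, scale: int = None) -> str:
    """Resolve the ESRGAN model per the documented rules."""
    if model is not None:
        return model  # 'scale' is ignored if model is specified directly
    if scale is not None:
        if scale not in SCALE_TO_MODEL:
            raise ValueError("scale shorthand supports only 2 or 4")
        return SCALE_TO_MODEL[scale]
    return "realesrgan_x2plus"  # documented default: 2x general upscale
```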
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare idempotent/non-destructive hints; description adds critical cost information ('$0.20 / 2 GCX'), hardware constraints (L4 GPU), and algorithm specifics (Real-ESRGAN) that help the agent understand processing time and billing implications beyond the safety profile.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four efficient clauses: technology, model variety, default behavior, and pricing. Every element earns its place; no redundancy with schema or annotations. Perfectly front-loaded with the core action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter AI tool with full schema coverage, the description adequately covers algorithm, cost, and hardware context. Minor gap: doesn't specify output format (base64 vs asset ID), though this is somewhat implied by the input schema pattern.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already comprehensively documents all parameters including enum values and defaults. Description provides a compact summary ('Default: 2x general') but doesn't add syntax or semantic details beyond what's in the structured schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Super-resolution using Real-ESRGAN' — specific verb, algorithm, and hardware (NVIDIA L4 GPU). It clearly distinguishes from siblings like resize_image (simple scaling) and vectorize_image by specifying AI-based upscaling technology.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions '5 models for different content types' and 'Default: 2x general upscale' which implies selection criteria, but lacks explicit when-to-use guidance vs. sibling resize_image or warnings about computational cost/latency compared to simple resizing.
vectorize_image (A · Read-only · Idempotent)
Convert raster images to SVG vector format. Supports color and binary modes with precision controls. Returns raw SVG XML string. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Vectorization mode | color |
| image | Yes | Base64-encoded PNG/JPEG image | |
| filter_speckle | No | Speckle filter (0-100, higher = fewer small artifacts) | |
| color_precision | No | Color clustering precision (1-10, higher = more colors) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnlyHint, idempotentHint, destructiveHint). Description adds critical behavioral context not in annotations: return format ('Returns raw SVG XML string') and operational modes. No contradiction with annotations; 'Convert' here refers to stateless transformation consistent with readOnlyHint.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded structure with purpose first, then features, then output format. Efficient except for redundant 'FREE. (FREE)' which wastes characters without adding functional value. Otherwise zero fluff.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, description appropriately compensates by specifying return format ('raw SVG XML string'). Covers all 4 parameters conceptually. With 100% schema coverage and good annotations, this is sufficient, though error handling or size limits could enhance it further.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with detailed parameter descriptions. Description adds semantic value by grouping filter_speckle and color_precision as 'precision controls' and explicitly surfacing the mode options ('color and binary') as features. Exceeds baseline of 3 by providing conceptual grouping beyond raw schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb-noun structure ('Convert raster images to SVG vector format') that clearly identifies the operation, source format, and target format. Distinct from sibling tools like resize_image, remove_background, or upscale_image by specifying vectorization to SVG.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage context through 'color and binary modes' and 'precision controls,' but lacks explicit when-to-use guidance or named alternatives. Does not clarify when to use this versus upscale_image or other image processing siblings in the provided list.
verify_provenance (B · Read-only · Idempotent)
Strip-proof provenance verification via Aegis hash index. FREE - no payment required. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Base64-encoded PNG/JPEG image | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover read-only, idempotent, and non-destructive traits. The description adds value by disclosing the 'Aegis hash index' implementation detail and cost model, but fails to describe what constitutes successful verification or the return format.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but suffers from redundant repetition ('FREE' mentioned twice, including in parenthesis). The parenthetical '(FREE)' adds no value beyond the preceding text, indicating poor editing rather than efficient communication.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read operation with comprehensive annotations, the description adequately covers the basics. However, it lacks explanation of return values or verification failure modes, which would be necessary for a complete understanding of the tool's behavior.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('Base64-encoded PNG/JPEG image'), the schema fully documents the single parameter. The description adds no supplemental context about the image parameter, meeting the baseline for high-coverage schemas.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action (provenance verification) and the specific method (via Aegis hash index). 'Strip-proof' adds technical specificity about the verification type, distinguishing it from generic verification tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it notes the cost model ('FREE - no payment required'), it lacks guidance on when to use this versus siblings like 'register_hash' (likely the write counterpart) or 'watermark_detect'. No prerequisites or exclusion criteria are provided.
watermark_detect (A · Read-only · Idempotent)
Detect and extract invisible DCT watermark from an image. Returns the embedded text payload if found. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Base64-encoded PNG/JPEG image | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations cover the safety profile (readOnlyHint, destructiveHint), the description adds crucial behavioral context: the specific algorithm (DCT), the return format (text payload), and cost implications ('FREE'). It does not contradict any annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief and front-loaded with the core action. However, it contains redundant text ('FREE. (FREE)') that wastes space without adding information, preventing a perfect score.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter detection tool, the description is complete. It compensates for the missing output schema by specifying the return value ('embedded text payload') and specifies the watermark type (DCT), giving the agent sufficient context to invoke the tool correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'image' parameter, the schema already fully documents the input requirements. The description mentions 'from an image' but adds no additional semantic context about encoding requirements or validation rules beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Detect and extract') and identifies the exact resource (invisible DCT watermark) and source (image). It clearly distinguishes from sibling 'watermark_embed' by specifying the inverse operation (extraction vs. embedding).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied through the action verbs ('detect/extract' vs the sibling's 'embed'), but there is no explicit guidance on when to choose this over 'verify_provenance' or other integrity tools, nor does it explicitly name 'watermark_embed' as the alternative for writing watermarks.
watermark_embed (A · Read-only · Idempotent)
Embed invisible DCT-domain watermark into an image. Encodes a text payload into luminance channel frequency coefficients. Survives light compression. FREE. (FREE)
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Base64-encoded PNG/JPEG image | |
| payload | Yes | Text payload to embed (max 256 chars) | |
| strength | No | Embedding strength (0.1-1.0, higher = more robust but more visible) | |
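The server's actual watermarking scheme is not disclosed beyond "DCT-domain, luminance channel". Purely to illustrate the idea — and the robustness-versus-visibility tradeoff behind the `strength` parameter — here is a toy one-dimensional sketch that embeds a single bit as the sign of a mid-frequency DCT coefficient:

```python
import math

def dct(block):
    # Naive DCT-II of a block of samples (toy; real codecs use a fast 8x8 2-D DCT).
    n = len(block)
    return [sum(block[x] * math.cos(math.pi * (x + 0.5) * u / n) for x in range(n))
            for u in range(n)]

def idct(coeffs):
    # Exact inverse (DCT-III with the matching normalization).
    n = len(coeffs)
    return [coeffs[0] / n
            + (2 / n) * sum(coeffs[u] * math.cos(math.pi * (x + 0.5) * u / n)
                            for u in range(1, n))
            for x in range(n)]

def embed_bit(block, bit, strength=0.5, u=4):
    # Force the sign of a mid-frequency coefficient: positive for 1, negative for 0.
    # Higher strength pushes the magnitude further from zero, so the bit survives
    # more distortion (e.g. light compression) at the cost of visibility.
    c = dct(block)
    magnitude = max(abs(c[u]), strength * len(block))
    c[u] = magnitude if bit else -magnitude
    return idct(c)

def detect_bit(block, u=4):
    return 1 if dct(block)[u] >= 0 else 0
```

The real tool would apply something like this per 8x8 luminance block across the image, spreading the 256-character payload over many coefficients.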
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable technical context beyond annotations: specifies DCT-domain processing, luminance channel targeting, and compression resilience characteristics. Includes cost information ('FREE') not present in annotations. Does not contradict readOnlyHint/idempotentHint annotations—'embed' refers to data transformation, not server state mutation.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear front-loading (purpose first, then technical details, then constraints/cost). Minor deduction for redundant '(FREE)' repetition. Otherwise efficient—every sentence conveys distinct information (method, mechanism, robustness, cost).
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for input parameters given 100% schema coverage and good annotations. However, lacks description of return value (watermarked image data) despite having no output schema. For a specialized steganography tool, mention of output format would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents all parameters (image encoding, payload constraints, strength range). Description reinforces 'text payload' concept but does not add syntax or semantic details beyond the schema. Baseline 3 appropriate given schema completeness.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states exact operation ('Embed invisible DCT-domain watermark'), target resource ('image'), and technical methodology ('Encodes a text payload into luminance channel frequency coefficients'). Clearly distinguishes from sibling 'watermark_detect' by focusing exclusively on embedding.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage context through 'Survives light compression,' indicating appropriate use cases (robustness needs). However, lacks explicit when-to-use guidance versus alternatives or prerequisites (e.g., does not mention using watermark_detect for verification).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
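The description scored above names a concrete technique: quantizing luminance DCT coefficients to carry a text payload. As an illustration only, here is a minimal sketch of that style of embedding in Python with NumPy. The coefficient band, quantization step, and function names are assumptions for the sketch, not the server's actual implementation, and a square luminance plane is assumed for brevity.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix, so the transform inverts exactly via M.T."""
    k, i = np.arange(n)[:, None], np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def embed_watermark(luma: np.ndarray, payload: str, strength: float = 4.0) -> np.ndarray:
    """Hide payload bits in mid-frequency DCT coefficients of a luminance plane.

    Each selected coefficient is snapped to a multiple of `strength` whose
    parity encodes one payload bit (quantization-index modulation); the
    quantization margin is what lets the mark survive light compression.
    """
    bits = np.unpackbits(np.frombuffer(payload.encode("utf-8"), dtype=np.uint8))
    m = dct_matrix(luma.shape[0])           # assumes a square plane
    coeffs = m @ luma.astype(np.float64) @ m.T
    # Hypothetical band choice: one anti-diagonal of mid-frequency coefficients.
    rows = np.arange(1, len(bits) + 1)
    cols = rows[::-1]
    for i, bit in enumerate(bits):
        q = int(np.round(coeffs[rows[i], cols[i]] / strength))
        if q % 2 != bit:                    # adjust parity to match the bit
            q += 1
        coeffs[rows[i], cols[i]] = q * strength
    return m.T @ coeffs @ m                 # inverse DCT back to pixel domain
```

A matching watermark_detect would recompute the same DCT and read back each selected coefficient's parity, which is the kind of cross-reference the review notes is missing from the description.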
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
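Before publishing, the claim file can be sanity-checked locally. A small Python sketch follows, using only the two fields shown in the structure above; `claim_matches` is a hypothetical helper that approximates the maintainer-email check Glama presumably performs, not part of any Glama SDK.

```python
import json

def claim_matches(doc: dict, account_email: str) -> bool:
    """Return True when the claim file lists account_email as a maintainer.

    Rough approximation of the verification Glama performs after fetching
    /.well-known/glama.json from the server's domain.
    """
    return any(m.get("email") == account_email for m in doc.get("maintainers", []))

# The exact structure from the claim instructions, parsed as JSON.
claim = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")
```

Serving the validated file at the well-known path on your domain is then all that is needed; Glama detects and verifies it automatically within a few minutes.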
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!