Glama
Ownership verified

Server Details

32 creative AI tools (18 free) for agents: generate, upscale, mockup, print, watermark.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: codex-curator/studiomcphub
GitHub Stars: 2
Server Listing
Studio MCP Hub


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

27 tools
batch_download (Grade A)
Annotations: Read-only, Idempotent

Bulk download metadata + images from Alexandria Aeternum (min 100 artworks). ($5.00 / 50 GCX)

Parameters (JSON Schema)
- offset (optional): Start offset for pagination
- quantity (optional): Number of artworks (min 100)
- dataset_id (optional): Dataset ID (default: alexandria-aeternum)
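Calling this tool goes through MCP's JSON-RPC `tools/call` method. A minimal client-side sketch: the envelope shape follows the MCP spec, while the helper name and the minimum-quantity guard (mirroring the "min 100" constraint above) are illustrative.

```python
import json

def batch_download_request(quantity: int, offset: int = 0,
                           dataset_id: str = "alexandria-aeternum") -> str:
    # The tool enforces a minimum batch of 100 artworks; fail early client-side
    # rather than paying for a rejected call.
    if quantity < 100:
        raise ValueError("batch_download requires quantity >= 100")
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "batch_download",
            "arguments": {"quantity": quantity, "offset": offset,
                          "dataset_id": dataset_id},
        },
    }
    return json.dumps(payload)

req = batch_download_request(quantity=150)
```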
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds crucial cost information absent from annotations, notes the minimum 100 artwork constraint, and clarifies the dual output type (metadata + images). Annotations cover safety profile (readOnly/idempotent), but description could further clarify output format (ZIP vs streaming) or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences front-loaded with purpose; zero waste. Price information is appended appropriately as a parenthetical constraint. Every element earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Rich annotations (readOnly, idempotent, openWorld) and complete parameter coverage reduce descriptive burden. Output schema absence is partially mitigated by stating 'download' and content types (metadata+images), though explicit return format (file handle, URL, etc.) would strengthen completeness given the transactional cost.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds value by mapping 'dataset_id' to specific source 'Alexandria Aeternum' and reinforcing the 'min 100' constraint for quantity parameter. Cost disclosure also signals that quantity parameter has financial scaling implications.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('download'), resource ('metadata + images'), source ('Alexandria Aeternum'), and scope ('Bulk', 'min 100 artworks'). Effectively distinguishes from singleton fetch tools like get_artwork via the bulk/minimum constraints.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides critical cost constraint '($5.00 / 50 GCX)' which governs when to use, and notes the minimum quantity requirement. Lacks explicit contrast with singleton alternatives (e.g., 'use get_artwork for single items'), but the constraints imply appropriate usage boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_balance (Grade A)
Annotations: Read-only, Idempotent

Check your GCX credit balance, loyalty rewards, and volume tier. FREE. (FREE)

Parameters (JSON Schema)
- wallet (required): Your EVM wallet address (0x...)
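Since the wallet parameter is required and the schema documents the `0x...` shape, an agent can validate it before dispatching the call. A sketch, assuming the standard EVM format of "0x" plus 40 hex characters (the helper name is illustrative):

```python
import re

# Standard EVM address: "0x" followed by exactly 40 hex characters.
EVM_ADDRESS = re.compile(r"^0x[0-9a-fA-F]{40}$")

def check_balance_args(wallet: str) -> dict:
    if not EVM_ADDRESS.match(wallet):
        raise ValueError(f"not an EVM address: {wallet!r}")
    return {"name": "check_balance", "arguments": {"wallet": wallet}}

args = check_balance_args("0x" + "ab" * 20)
```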
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds crucial cost information ('FREE. (FREE)') not present in annotations, and discloses return values (balance, rewards, tier). Annotations cover read-only/idempotent safety properties, description complements this with economic behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded purpose statement followed by cost indicator. Slight redundancy with repeated '(FREE)' but every element conveys distinct information (function + cost).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple read-only query. Compensates for missing output schema by enumerating return values (credit balance, rewards, tier).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage describing the 'wallet' parameter. Description adds no parameter details but meets baseline expectations since schema fully documents the single input field.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Check' with clear resources 'GCX credit balance, loyalty rewards, and volume tier'. Uniquely identifies this as a financial/account query tool distinct from the image/asset manipulation siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through 'FREE' notation suggesting zero-cost operation, but lacks explicit when-to-use guidance or alternatives comparison. Context is clear enough given unique function among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compliance_manifest (Grade B)
Annotations: Read-only, Idempotent

Get AB 2013 (California) + EU AI Act Article 53 compliance manifests for dataset usage. FREE. (FREE)

Parameters (JSON Schema)
- dataset_id (optional): Dataset ID (default: alexandria-aeternum)
- regulation (optional): Filter: ab2013, eu_ai_act, or all (default: all)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly/idempotent/destructive hints). Description adds valuable domain-specific scope (which specific regulations are supported) and implies no cost. Does not describe manifest format, pagination, or caching behavior, but meets minimum bar given strong annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Core information is front-loaded and efficient, but includes redundant '(FREE)' repetition at end that wastes tokens without adding value. Single sentence structure is appropriate for tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for scope: identifies specific regulatory frameworks covered (AB 2013, EU AI Act Art 53), mentions dataset context, and suffices for a read-only retrieval tool with strong annotations. No output schema exists, but description adequately covers intent for a manifest retrieval operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage ('Dataset ID' and filter options documented). Description mentions 'dataset usage' and regulation names which map to parameters but adds no syntax, format constraints, or semantic meaning beyond schema definitions. Baseline 3 appropriate given schema carries full load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific action (Get) and resource (AB 2013 + EU AI Act Article 53 compliance manifests). Explicitly distinguishes from creative/asset-manipulation siblings by specifying legal/compliance domain. Docked one point for redundant 'FREE' text which adds noise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to invoke vs alternatives, prerequisites (e.g., needing specific dataset permissions), or conditions for selecting specific regulation filters. Only states what it does, not when to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_color_profile (Grade A)
Annotations: Read-only, Idempotent

Convert between sRGB and CMYK color profiles. Essential for print production. CMYK output as TIFF with embedded DPI. FREE. (FREE)

Parameters (JSON Schema)
- dpi (optional): Output DPI (72-1200)
- image (required): Base64-encoded PNG/JPEG image
- target_profile (optional): Target color profile (default: cmyk)
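The image parameter must be base64-encoded and the DPI is range-bounded, so argument preparation is worth sketching. Assumptions: the helper name is illustrative, and the DPI bounds come from the schema above.

```python
import base64

def convert_color_profile_args(image_bytes: bytes, dpi: int = 300,
                               target_profile: str = "cmyk") -> dict:
    # Schema bounds DPI to 72-1200; reject out-of-range values up front.
    if not 72 <= dpi <= 1200:
        raise ValueError("dpi must be within 72-1200")
    return {
        # The schema requires a base64-encoded PNG/JPEG payload.
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "dpi": dpi,
        "target_profile": target_profile,
    }

args = convert_color_profile_args(b"\x89PNG-sample-bytes", dpi=300)
```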
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly/idempotent status. Description adds valuable behavioral context: CMYK outputs as TIFF format (not mentioned in schema), DPI embedding behavior, and print production use case. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Core information is front-loaded, but 'FREE. (FREE)' is redundant noise that wastes space without adding technical value. Otherwise appropriately compact.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple 3-parameter schema with full coverage and helpful annotations, description adequately covers conversion behavior, output format specifics, and intended use case. No output schema exists but description sufficiently indicates return type via format mention.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description enhances by specifying that CMYK conversion produces TIFF output and that DPI is embedded in the result—details beyond the schema's basic 'Output DPI' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Convert) and resource (color profiles between sRGB/CMYK) and output format (TIFF). Mentions print production domain. However, does not explicitly distinguish from similar sibling tools like 'print_ready' or 'mockup_image'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides domain context ('Essential for print production') implying when to use, but lacks explicit when-not guidance or named alternatives. No comparison to related image processing siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_asset (Grade C)
Annotations: Destructive, Idempotent

Delete an asset from your wallet storage. FREE. (FREE)

Parameters (JSON Schema)
- key (required): Asset key to delete
- wallet (required): Your EVM wallet address (0x...)
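Because this tool carries a Destructive annotation, a reasonable client pattern is to require explicit opt-in before an agent may issue the call. A sketch (the helper name and the `confirmed` flag are illustrative, not part of the tool's schema):

```python
def delete_asset_args(key: str, wallet: str, confirmed: bool = False) -> dict:
    # The tool is annotated Destructive; refuse to build the call unless the
    # caller has explicitly confirmed the irreversible operation.
    if not confirmed:
        raise PermissionError("refusing destructive delete without confirmation")
    return {"name": "delete_asset", "arguments": {"key": key, "wallet": wallet}}

args = delete_asset_args("art-042", "0x" + "0" * 40, confirmed=True)
```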
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructive/idempotent hints, so description only needs to add context. It adds cost information ('FREE') and wallet storage location, but omits critical behavioral details like permanent data loss, recovery policies, or side effects on linked records.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Contains redundant repetition ('FREE. (FREE)') that wastes tokens without adding information. The core sentence is efficient, but the parenthetical duplication and period fragmentation indicate poor editing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 2-parameter destructive operation with complete schema coverage and behavioral annotations (destructiveHint=true). Missing explicit warning about permanence, but the 'Delete' verb + destructive annotation provide minimal viable safety context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage ('Asset key to delete', 'Your EVM wallet address'), establishing baseline 3. Description adds 'wallet storage' context slightly reinforcing the wallet parameter semantics but adds nothing about key format or validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (Delete) + resource (asset) + location context (wallet storage), distinguishing it from sibling read operations like get_asset or list_assets. The 'FREE' repetition is distracting but doesn't obscure the core purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use versus alternatives (e.g., when to delete vs. archive), no prerequisites (e.g., verify ownership first), and no warnings about irreversibility despite the destructive nature.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

enrich_metadata (Grade A)

AI-powered artwork analysis. Two tiers: 'standard' (1 GCX) = SEO-optimized title, description, keywords, alt_text via Nova-Lite. 'premium' (2 GCX, default) = full 8-section Golden Codex museum-grade analysis via Nova/Gemini 2.5 Pro. Optionally customize metadata fields and add a Soul Whisper personal message. ($0.20 / 2 GCX)

Parameters (JSON Schema)
- tier (optional): Metadata tier: 'standard' (1 GCX, SEO title/description/keywords/alt_text) or 'premium' (2 GCX, full Golden Codex 8-section analysis) (default: premium)
- image (required): Base64-encoded PNG/JPEG image
- title (optional): Artwork title (leave blank for AI to suggest one)
- context (optional): Creator's brief, technical/artistic context for AI analysis (e.g. 'SD 3.5 Large, impressionist style')
- artist_name (optional): Artist/creator name (embedded in metadata)
- content_type (optional): Content type hint: 'artwork' or 'photo', affects analysis style (default: artwork)
- soul_whisper (optional): Optional personal message embedded in metadata, visible to anyone who reads the image's provenance (premium tier only)
- creation_year (optional): 4-digit creation year
- copyright_holder (optional): Copyright owner name
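The tier choice drives both cost and which optional fields apply (soul_whisper is documented as premium-only). A sketch of argument assembly with a cost estimate attached; the helper name and the silent-drop policy for soul_whisper on the standard tier are illustrative choices:

```python
# GCX cost per tier, copied from the tier descriptions above.
TIER_COST_GCX = {"standard": 1, "premium": 2}

def enrich_metadata_args(image_b64: str, tier: str = "premium", **extras):
    if tier not in TIER_COST_GCX:
        raise ValueError("tier must be 'standard' or 'premium'")
    # soul_whisper is premium-only per the schema; drop it on the standard tier.
    if tier == "standard":
        extras.pop("soul_whisper", None)
    return {"image": image_b64, "tier": tier, **extras}, TIER_COST_GCX[tier]

args, cost = enrich_metadata_args("aGVsbG8=", tier="standard",
                                  soul_whisper="hidden note",
                                  artist_name="Jane Doe")
```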
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Substantial behavioral disclosure beyond annotations: specifies pricing ($0.20/2 GCX, GCX costs), underlying AI models (Nova-Lite, Gemini 2.5 Pro), output visibility ('visible to anyone who reads provenance'), and tier-specific deliverables (8-section Golden Codex vs SEO fields). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information-dense with zero waste. Front-loads core value proposition (AI analysis), mid-section explains tier differentiation with costs, closing covers optional customizations. Single-paragraph structure that progresses logically from mandatory concept to optional details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Robust coverage for complex 9-parameter tool with nested objects. Explains expected outputs (8-section analysis vs SEO metadata) since no output schema exists. Pricing transparency is critical for cost-bearing operations. Minor gap: could specify return format (JSON structure) given lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline 3 applies. Description adds semantic grouping ('customize metadata fields', 'Soul Whisper personal message') that contextualizes parameter relationships, but individual parameter details (types, defaults) are fully documented in schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'analysis' and resource 'artwork' clearly stated. Distinctly positions against siblings by emphasizing 'AI-powered' generation of SEO metadata vs 'museum-grade analysis', distinguishing it from manual tools like `infuse_metadata` or simple extraction tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clear internal guidance on when to use 'standard' (1 GCX, SEO focus) vs 'premium' (2 GCX, full analysis) tiers including cost trade-offs. Lacks explicit comparison to sibling `infuse_metadata` (manual vs AI-generated metadata), but implies AI automation through Nova/Gemini model mentions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

extract_palette (Grade A)
Annotations: Read-only, Idempotent

Extract dominant color palette from an image. Returns hex/RGB/HSL colors with percentages, CSS names, and complementary colors. Great for design systems, mood boards, and color matching. FREE. (FREE)

Parameters (JSON Schema)
- image (required): Base64-encoded PNG/JPEG image
- format (optional): Color format in output (default: hex)
- num_colors (optional): Number of colors to extract (3-12)
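The num_colors range (3-12) and the format default are easy to enforce before calling. A sketch; the helper name is illustrative, and the format enum is an assumption inferred from the tool description's "hex/RGB/HSL" wording rather than from the schema:

```python
def extract_palette_args(image_b64: str, num_colors: int = 6,
                         fmt: str = "hex") -> dict:
    # Schema allows 3-12 colors; clamp rather than fail, for convenience.
    num_colors = max(3, min(12, num_colors))
    # Assumed enum, inferred from the "hex/RGB/HSL" description text.
    if fmt not in {"hex", "rgb", "hsl"}:
        raise ValueError("unsupported color format")
    return {"image": image_b64, "num_colors": num_colors, "format": fmt}

args = extract_palette_args("aGVsbG8=", num_colors=20)
```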
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, idempotent, non-destructive safety properties. The description adds valuable output context by detailing the specific data returned ('hex/RGB/HSL colors with percentages, CSS names, and complementary colors') which compensates for the missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Follows strong front-loading with the core action first, followed by outputs and use cases. Deduction for the redundant 'FREE. (FREE)' closing which wastes tokens without adding semantic value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description appropriately dedicates space to explaining return values. Combined with comprehensive annotations covering safety properties and clear use cases, the description is complete for this tool's complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents all parameters (image encoding, format enum, num_colors range). The description does not add syntax details or parameter interdependencies beyond the schema, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Extract') and clear resource ('dominant color palette from an image'), immediately distinguishing it from sibling image manipulation tools like upscale_image or remove_background which transform pixels rather than analyze them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear contextual guidance through specific use cases ('design systems, mood boards, and color matching') that help an agent understand appropriate invocation scenarios, though it lacks explicit 'when not to use' language or sibling comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_artwork (Grade A)
Annotations: Read-only, Idempotent

Get Human_Standard metadata (500-1200 tokens) + signed image URL for a museum artwork. ($0.10 / 1 GCX)

Parameters (JSON Schema)
- artifact_id (required): Artifact ID from search results (e.g. 'met_437419')
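Since each call costs 1 GCX, an agent working through a list of artifact IDs may want a budget cap. A sketch; the helper name is illustrative, and the artifact IDs other than 'met_437419' (taken from the schema example) are made up for the demo:

```python
GET_ARTWORK_COST_GCX = 1  # $0.10 / 1 GCX per call, per the description

def budgeted_get_artwork(artifact_ids: list, budget_gcx: int) -> list:
    # Stop issuing paid calls once the GCX budget would be exceeded.
    calls = []
    for artifact_id in artifact_ids:
        if (len(calls) + 1) * GET_ARTWORK_COST_GCX > budget_gcx:
            break
        calls.append({"name": "get_artwork",
                      "arguments": {"artifact_id": artifact_id}})
    return calls

calls = budgeted_get_artwork(["met_437419", "id_b", "id_c"], budget_gcx=2)
```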
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnly/idempotent/destructive), but description adds critical behavioral context: exact cost pricing, response size (500-1200 tokens), and URL type (signed/temporary). These are crucial operational details not present in structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single dense sentence with zero waste. Front-loaded with action ('Get'), followed by format specification, then output artifacts (metadata + signed URL), ending with cost—every clause delivers essential information for tool selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Good coverage given no output schema: describes cost, response size, metadata standard, and image URL type. Could improve by explicitly stating prerequisite workflow with search_artworks (implied only in parameter schema description), but adequate for operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the single artifact_id parameter. The description references 'museum artwork' which contextualizes the parameter, but does not add syntax, format constraints, or behavioral semantics beyond what the schema already provides. Baseline 3 appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with specific resource 'Human_Standard metadata' and scope 'museum artwork'. The inclusion of cost ($0.10 / 1 GCX) and token count (500-1200) distinguishes it from siblings like get_asset (likely generic) and search_artworks (likely list-based).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lacks explicit 'when to use vs alternatives' text, but the cost disclosure ($0.10 / 1 GCX) provides implicit usage guidance that this is a paid operation requiring budget consciousness. The token range implies use-case fit (detailed metadata needs).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_artwork_oracle (Grade A)
Annotations: Read-only, Idempotent

Get Hybrid_Premium 111-field NEST analysis (2K-6K tokens) + image. Deep AI visual analysis with color palette, composition, symbolism, emotional mapping. ($0.20 / 2 GCX)

Parameters (JSON Schema)
- artifact_id (required): Artifact ID from search results (e.g. 'met_437419')
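The evaluation notes this tool lacks explicit guidance on when to choose it over get_artwork. One way an agent could encode that decision itself, using only the costs and depth stated in the two descriptions (the helper name and the decision rule are illustrative):

```python
def pick_artwork_tool(need_deep_analysis: bool, budget_gcx: int):
    # get_artwork: 1 GCX, basic Human_Standard metadata.
    # get_artwork_oracle: 2 GCX, 111-field deep visual analysis.
    cost = 2 if need_deep_analysis else 1
    if budget_gcx < cost:
        return None  # cannot afford either option at this depth
    return "get_artwork_oracle" if need_deep_analysis else "get_artwork"

choice = pick_artwork_tool(need_deep_analysis=True, budget_gcx=5)
```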
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With annotations already declaring read-only and idempotent behavior, the description adds critical cost information ($0.20 / 2 GCX), output magnitude (2K-6K tokens), and composition (+ image). It clarifies specific analysis domains (symbolism, emotional mapping) not covered by the safety-focused annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single high-density sentence delivering product name, scope (111-field), size (2K-6K tokens), specific capabilities (color, composition, symbolism), and pricing. Every element earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description partially compensates by indicating output includes an image and extensive token counts. It adequately covers the tool's complexity and cost structure, though it could clarify return format (JSON vs text) for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage for the single artifact_id parameter, the schema sufficiently documents inputs. The description neither repeats nor adds to parameter semantics, warranting the baseline score of 3 for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action ('Get') and resource ('Hybrid_Premium 111-field NEST analysis') plus deliverables ('+ image'). It specifies depth (2K-6K tokens) and analysis dimensions (color, composition, symbolism), functionally distinguishing from simpler siblings like get_artwork and extract_palette through scope description, though it does not explicitly name those alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage tier through the cost disclosure ('$0.20 / 2 GCX'), suggesting this is for deep analysis needs rather than casual lookups. However, it lacks explicit guidance on when to choose this versus get_artwork or extract_palette, and provides no prerequisites or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_asset (Grade B)
Annotations: Read-only, Idempotent

Retrieve a stored asset from your wallet storage by key. FREE. (FREE)

ParametersJSON Schema
NameRequiredDescriptionDefault
keyYesAsset key to retrieve
walletYesYour EVM wallet address (0x...)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. Description adds cost information ('FREE') which annotations lack, and confirms scope ('wallet storage'). Adds minimal behavioral context beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Short and front-loaded. However, 'FREE. (FREE)' repeats the same cost note twice, wasting tokens without adding clarity; dropping the duplicate would make the single sentence cleaner.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 2-parameter read operation with good annotations and full schema coverage. Missing return value description, but no output schema exists so this is acceptable. Could mention it returns the asset content/data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, baseline is 3. Description mentions 'by key' and 'wallet storage' which map to parameters, but adds no syntax details, format constraints, or examples beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Retrieve') + resource ('stored asset') + scope ('wallet storage') + identifier ('by key'). Distinguishes from 'list_assets' by implying singular retrieval, though doesn't explicitly differentiate from 'get_artwork' or 'batch_download' siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this vs siblings like list_assets (enumeration), batch_download (bulk), or get_artwork (possibly public). No prerequisites or conditions mentioned despite this being a wallet-specific operation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
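Since both of get_asset's parameters are required strings, a call is fully determined by two values. A minimal sketch of what the request body might look like, assuming a standard MCP JSON-RPC tools/call envelope (the exact framing is the client's job, and the wallet value below is a placeholder, not a real address):

```python
import json

# Hypothetical JSON-RPC envelope for invoking get_asset over MCP.
# The "arguments" keys come from the parameter table above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_asset",
        "arguments": {
            "key": "my-landscape",  # asset key to retrieve
            "wallet": "0x0000000000000000000000000000000000000000",  # placeholder
        },
    },
}

wire = json.dumps(request)  # what actually goes over the transport
```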

get_tool_schema (Grade A)
Read-only · Idempotent

Get the full JSON Schema and usage examples for a specific tool. Use after search_tools to load only what you need. (FREE)

Parameters (JSON Schema)
Name | Required | Description | Default
tool_name | Yes | Tool name from search_tools results (e.g. 'generate_image') |

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly/idempotent/destructive status. Description adds cost information '(FREE)' not present in annotations and clarifies that 'usage examples' are included in the return, adding modest behavioral context beyond structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first states purpose, second gives usage guideline. Front-loaded with the verb, parenthetical '(FREE)' efficiently appended without disrupting flow. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter retrieval tool with strong annotations and no output schema, description adequately covers intent, workflow, and cost. Could marginally improve by hinting at output structure, but sufficient given schema simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the single parameter. Description mentions 'specific tool' which maps to the tool_name parameter, but adds no syntax details or format guidance beyond what the schema already provides. Baseline 3 appropriate given complete schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'JSON Schema and usage examples' makes purpose unambiguous. Distinguishes from sibling search_tools (which finds tools) by focusing on retrieving detailed schema for a specific tool already identified.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit workflow guidance 'Use after search_tools to load only what you need' establishes clear sequencing and efficiency rationale. Names the sibling tool directly, clarifying when to invoke this tool in the discovery workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
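The 'use after search_tools' guidance amounts to a lazy-loading pattern: fetch a tool's full schema only when you intend to call it, and cache the result. A sketch under that assumption (call_tool is a hypothetical stand-in for whatever invoke method an MCP client exposes; the canned response is invented):

```python
# Cache of tool_name -> schema, so each schema is fetched at most once.
_schema_cache = {}

def get_schema(call_tool, tool_name):
    """Fetch and memoize a tool's JSON Schema via get_tool_schema."""
    if tool_name not in _schema_cache:
        _schema_cache[tool_name] = call_tool(
            "get_tool_schema", {"tool_name": tool_name}
        )
    return _schema_cache[tool_name]

# Fake transport for illustration only: returns a canned response.
def fake_call(name, args):
    return {"tool": args["tool_name"], "inputSchema": {"type": "object"}}

schema = get_schema(fake_call, "generate_image")
```

A second call for the same tool never hits the transport, which is the "load only what you need" economy the description is pointing at.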

infuse_metadata (Grade A)
Idempotent

Embed metadata into image via ExifTool. Two modes: 'standard' (XMP/IPTC only — title, description, keywords, copyright) or 'full_gcx' (default — full Golden Codex XMP-gc namespace + IPTC + C2PA + soulmark + hash registration). ($0.10 / 1 GCX)

Parameters (JSON Schema)
Name | Required | Description | Default
image | Yes | Base64-encoded PNG/JPEG image |
metadata | Yes | Metadata JSON to embed. For standard: {title, description, keywords, alt_text, copyright_holder}. For full_gcx: Golden Codex JSON from enrich_metadata. |
metadata_mode | No | Infusion mode: 'standard' (XMP/IPTC fields only) or 'full_gcx' (full Golden Codex + soulmark + hash registration) | full_gcx

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical non-annotation details: pricing model, specific technologies (ExifTool, C2PA), and exact namespace scopes for each mode. Aligns correctly with annotations (idempotentHint:true matches re-embedding behavior, destructiveHint:false matches non-destructive metadata writing). Could clarify whether the operation returns new image bytes or modifies the file in place.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three information-dense components: (1) Action + technology, (2) Mode comparison with scoping details, (3) Cost disclosure. No repetition of parameter names, zero filler. Front-loaded with primary verb.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent coverage for a 3-parameter mutation tool: explains mode differences, references the prerequisite tool (enrich_metadata), and discloses costs. With no output schema, it appropriately describes behavioral scope. Minor gap: it doesn't specify whether the output is returned bytes or a server-side reference.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds value by enumerating specific standard fields (title, description, keywords, copyright) in text and clarifying the 'Golden Codex' ecosystem relationship. Cost disclosure ('$0.10 / 1 GCX') contextualizes the 'full_gcx' default mode parameter choice.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Embed metadata'), target resource ('image'), and implementation technology ('ExifTool'). Distinguishes from siblings by defining its relationship to 'enrich_metadata' (source of Golden Codex JSON) and mentioning GCX-specific features (soulmark, hash registration) that siblings like 'register_hash' or 'watermark_embed' handle separately.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit mode selection criteria ('standard' vs 'full_gcx') with cost implications ('$0.10 / 1 GCX'). Implies workflow chain with 'enrich_metadata' by referencing it as the source for full_gcx JSON. Lacks explicit 'when not to use' guidance (e.g., vs 'watermark_embed' for visible marking only).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
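The field list in the metadata parameter makes the 'standard' mode payload shape concrete. An illustrative arguments object, assuming metadata is passed as a JSON object (all values are invented, and the image string is a placeholder rather than real base64):

```python
import json

# The five field names come from the metadata parameter description.
standard_metadata = {
    "title": "Sunset Over Dunes",
    "description": "Golden-hour landscape study.",
    "keywords": ["landscape", "sunset", "dunes"],
    "alt_text": "Orange sun setting behind sand dunes",
    "copyright_holder": "Example Artist",
}

arguments = {
    "image": "<base64-encoded PNG/JPEG>",  # placeholder
    "metadata": standard_metadata,
    "metadata_mode": "standard",           # overriding the 'full_gcx' default
}

payload = json.dumps(arguments)
```

For full_gcx mode, the metadata value would instead be the Golden Codex JSON produced by enrich_metadata, per the parameter description.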

list_assets (Grade B)
Read-only · Idempotent

List all assets in your wallet storage with sizes and metadata. FREE. (FREE)

Parameters (JSON Schema)
Name | Required | Description | Default
wallet | Yes | Your EVM wallet address (0x...) |

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly/idempotent/destructive status. Description adds cost information ('FREE') and hints at return payload contents ('sizes and metadata'), adding value beyond structured fields. Does not mention pagination or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with key action and scope, but contains redundant repetition ('FREE. (FREE)'). Otherwise appropriately brief for tool complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Sufficient for a simple list operation with strong annotations and complete schema. Mentioning 'sizes and metadata' compensates partially for missing output schema. Could note pagination behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter description in schema ('Your EVM wallet address'). Description makes no mention of parameters, but baseline 3 is appropriate given complete schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (List), resource (assets), scope (wallet storage), and return attributes (sizes and metadata). Effectively distinguishes from sibling 'get_asset' by emphasizing 'all assets'. The 'FREE' repetition is odd but doesn't obscure purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus 'get_asset' or 'search_artworks', nor prerequisites like wallet registration. Simply states what the tool does, not when to choose it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mockup_image (Grade A)
Read-only · Idempotent

Place your design onto product mockups (t-shirt, poster, canvas, phone case, mug, tote bag). Instant product visualization for e-commerce and print-on-demand. FREE. ($0.10 / 1 GCX)

Parameters (JSON Schema)
Name | Required | Description | Default
image | Yes | Base64-encoded design image |
product | No | Product type | tshirt
background_color | No | Background hex color | #f5f5f5

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true; description adds critical behavioral info not in annotations: cost structure ('FREE. ($0.10 / 1 GCX)'). This credit consumption warning is essential for agent decision-making. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each earning its place: function statement with examples, use case context, and cost disclosure. Front-loaded with core action, no redundancy, efficient density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Well-covered for a simple 3-parameter tool: describes function, target users (e-commerce/POD), and cost. Minor gap: no output schema exists, and description could briefly indicate that it returns/outputs a mockup image file, though this is somewhat inferable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for all three parameters. Description enumerates product types matching the enum values, serving as reinforcement, but adds no additional semantic depth (e.g., image format requirements, color syntax details) beyond what the schema already provides. Baseline 3 appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb+resource combination ('Place your design onto product mockups') with concrete examples (t-shirt, poster, canvas, phone case, mug, tote bag). Clearly distinguishes from sibling tools like remove_background, resize_image, or print_ready by focusing on product visualization rather than editing or file preparation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context ('Instant product visualization for e-commerce and print-on-demand') which signals when to use, but lacks explicit alternatives or exclusions. Does not clarify when to choose this over siblings like print_ready or get_asset for product images.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_hash (Grade A)
Idempotent

Register 256-bit perceptual hash with LSH band indexing for strip-proof provenance. ($0.10 / 1 GCX)

Parameters (JSON Schema)
Name | Required | Description | Default
image | Yes | Base64-encoded PNG/JPEG image |

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish the operational profile (idempotentHint=true, destructiveHint=false, readOnlyHint=false). The description adds crucial behavioral context not in annotations: explicit cost ('$0.10 / 1 GCX'), technical indexing method ('LSH band'), and specific hash bit-depth ('256-bit'). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single dense sentence with zero waste. Front-loaded action ('Register'), followed by technical specifications, purpose ('strip-proof provenance'), and cost model. Every clause delivers distinct value (bit-depth, algorithm, use case, pricing).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (cryptographic hashing, LSH indexing) and lack of output schema, the description adequately covers the operation mechanism and cost. Could be improved by noting the relationship to verify_provenance or indicating the return value (hash ID), but sufficient for selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (image parameter fully documented as 'Base64-encoded PNG/JPEG'). The description does not add parameter semantics beyond the schema, but with complete schema coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description provides specific verb ('Register'), precise technical resource ('256-bit perceptual hash'), implementation details ('LSH band indexing'), and domain ('strip-proof provenance'). The technical specificity clearly distinguishes this from sibling verify_provenance and other asset management tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'strip-proof provenance' implies the use case (copyright protection/image authentication), but there is no explicit guidance on when to use this versus verify_provenance or other provenance-related siblings. No prerequisites or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
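The LSH band indexing the description names can be sketched generically: split the 256-bit hash into fixed-width bands and treat any exact band match as a near-duplicate candidate. The 16 × 16-bit layout below is an assumption chosen for illustration, not the server's actual configuration:

```python
BANDS, BAND_BITS = 16, 16  # 16 bands x 16 bits = 256 bits (assumed layout)

def bands(hash256: int):
    """Split a 256-bit integer hash into BANDS fixed-width slices."""
    mask = (1 << BAND_BITS) - 1
    return [(hash256 >> (i * BAND_BITS)) & mask for i in range(BANDS)]

def candidate_match(a: int, b: int) -> bool:
    """Two hashes are candidates if ANY band matches exactly."""
    return any(x == y for x, y in zip(bands(a), bands(b)))

h1 = int("ab" * 32, 16)       # some 256-bit hash value
h2 = h1 ^ (1 << 200)          # one flipped bit: 15 of 16 bands still match
h3 = h1 ^ int("01" * 32, 16)  # every band perturbed: no band matches
```

This is why banding resists "stripping": lightly altered copies still collide on most bands, while unrelated images almost never do.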

register_wallet (Grade A)
Idempotent

Register your wallet to get 10 FREE GCX credits ($1 value). New wallets only — enough to try upscale + enrich. Purchase more via GCX packs. FREE. (FREE)

Parameters (JSON Schema)
Name | Required | Description | Default
wallet | Yes | Your EVM wallet address (0x...) |

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds significant value beyond annotations by detailing the reward structure (10 credits/$1 value), eligibility restriction, and specific downstream tools the credits enable (upscale, enrich). Accurately reflects the write operation implied by annotations (readOnlyHint=false).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information is front-loaded with action and value proposition, but is undermined by redundant 'FREE. (FREE)' at the end. Otherwise efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple single-parameter tool. Covers the essential behavioral contract (registration bonus, eligibility) without needing to elaborate on return values given the straightforward nature of the operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with 'Your EVM wallet address (0x...)' description. The description mentions 'your wallet' but adds no additional semantic detail about the parameter format or validation beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Register' with resource 'wallet' and specific outcome (10 GCX credits). The 'New wallets only' constraint helps scope usage, though it doesn't explicitly contrast with sibling tools like check_balance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states eligibility constraint ('New wallets only') and references intended use cases ('try upscale + enrich'). Mentions credit purchasing path for users needing more, providing clear context on when to invoke.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
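The '0x...' hint in the wallet parameter implies the usual EVM address shape: '0x' followed by 40 hex characters. A minimal client-side sanity check along those lines (the server's real validation may well be stricter, e.g. EIP-55 checksum casing):

```python
import re

# '0x' plus exactly 40 hexadecimal characters.
ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def looks_like_evm_address(value: str) -> bool:
    """Cheap format check before spending a call on register_wallet."""
    return bool(ADDRESS_RE.match(value))
```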

remove_background (Grade A)
Read-only · Idempotent

Remove image background using AI (U2-Net). Returns RGBA PNG/WebP with transparent background. Perfect for product photos, portraits, and design assets. FREE. (FREE)

Parameters (JSON Schema)
Name | Required | Description | Default
image | Yes | Base64-encoded PNG/JPEG image |
output_format | No | Output format | png

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only safety (readOnlyHint: true). The description adds valuable behavioral context beyond annotations: specific output format details ('RGBA PNG/WebP'), the AI model architecture ('U2-Net'), and cost information ('FREE').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with core functionality but contains unnecessary redundancy ('FREE. (FREE)'). Otherwise, the sentence structure efficiently conveys technical details, use cases, and return format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description adequately compensates by specifying the return format ('RGBA PNG/WebP'). It provides sufficient context for a 2-parameter image processing tool, including the specific AI model used.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents both parameters. The description reinforces the output format options but does not significantly expand parameter semantics beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the core action ('Remove image background using AI'), specifies the technology ('U2-Net'), and distinguishes from sibling image tools by listing specific use cases ('product photos, portraits, and design assets').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description provides appropriate use cases ('Perfect for...'), it lacks explicit when-not-to-use guidance or comparisons to sibling alternatives like vectorize_image or upscale_image.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resize_image (Grade A)
Read-only · Idempotent

Resize an image to target dimensions. Supports fit modes: 'cover' (crop to fill), 'contain' (fit within, letterbox), 'stretch' (exact size). Useful for preparing images for specific platforms, thumbnails, or social media. FREE. (FREE)

Parameters (JSON Schema)
Name | Required | Description | Default
mode | No | Resize mode: 'contain' (fit within bounds, preserve aspect ratio), 'cover' (crop to fill), 'stretch' (exact size, may distort) | contain
image | Yes | Base64-encoded PNG/JPEG image |
width | Yes | Target width in pixels (1-8192) |
format | No | Output format | png
height | Yes | Target height in pixels (1-8192) |
quality | No | JPEG/WebP quality (1-100) |

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint and idempotentHint, the description adds valuable behavioral context explaining exactly what each fit mode does ('crop to fill', 'fit within, letterbox', 'exact size'), helping agents understand the visual outcome before invoking.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The first sentence front-loads the core action effectively. However, 'FREE. (FREE)' is repeated cost metadata that reduces clarity. Otherwise, the sentence structure is efficient with no excessive verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage and comprehensive annotations, the description successfully covers use cases and behavioral traits (fit modes). While it omits output format details (returned base64), no output schema is provided, and the input schema being base64 implies the output pattern.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. The description lists the fit modes and their behaviors, but this largely duplicates the detailed descriptions already present in the schema properties (e.g., mode enum descriptions). It adds minimal semantic value beyond the structured schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Resize') plus resource ('image') and scope ('target dimensions'). It clearly defines the three fit modes ('cover', 'contain', 'stretch') with parenthetical explanations, distinguishing this from sibling tools like upscale_image or vectorize_image which perform different transformations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context ('Useful for preparing images for specific platforms, thumbnails, or social media') but lacks explicit guidance on when to use this tool versus alternatives like upscale_image or when to choose specific fit modes over others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
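The three fit modes reduce to a single scale-factor choice, which is worth seeing concretely. A sketch that mirrors what the mode descriptions state (illustrative math only, not the server's code; a 'cover' result would additionally be cropped to the target box):

```python
def fitted_size(src_w, src_h, dst_w, dst_h, mode="contain"):
    """Scaled dimensions before any cropping, per the documented fit modes."""
    if mode == "stretch":  # exact size, may distort
        return dst_w, dst_h
    sx, sy = dst_w / src_w, dst_h / src_h
    scale = min(sx, sy) if mode == "contain" else max(sx, sy)  # 'cover'
    return round(src_w * scale), round(src_h * scale)
```

For a 1000x500 image into an 800x800 box: 'contain' yields 800x400 (letterboxed), 'cover' scales to 1600x800 before cropping, and 'stretch' forces 800x800.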

save_asset (Grade A)
Idempotent

Save an image or data to your personal wallet storage. 100MB free per wallet, 500 assets max. ($0.10 / 1 GCX)

Parameters (JSON Schema)
Name | Required | Description | Default
key | Yes | Unique name for this asset (e.g., 'my-landscape', 'pipeline-001') |
data | Yes | Base64-encoded data (image, JSON, etc.) — max 10MB |
wallet | Yes | Your EVM wallet address (0x...) |
metadata | No | Optional metadata JSON to store alongside |
content_type | No | MIME type | image/png

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Strong addition beyond annotations: discloses cost model ('$0.10 / 1 GCX') and storage quotas (100MB/500 assets) critical for invocation decisions. Annotations already establish idempotentHint=true and write permissions, so description adds economic and capacity constraints not present in structured fields. Could improve by clarifying overwrite behavior when key exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences each carry distinct weight: purpose statement, capacity limits, and pricing. Front-loaded with the core action, zero redundancy, no filler phrases. Excellent information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a write operation with idempotentHint=true and no output schema, the description adequately covers economic costs, storage boundaries, and resource type. It omits only explicit return-value documentation and success/failure semantics, though these matter less when no output schema is present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds minimal semantic value beyond schema—'image or data' aligns with the data parameter description and 'personal wallet storage' contextualizes the wallet parameter, but doesn't elaborate on the metadata nested object or content_type defaults beyond what's documented in properties.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states specific action ('Save') and resource ('image or data') with clear destination ('personal wallet storage'). The scope is well-defined and naturally distinguishes from siblings like get_asset, delete_asset, and list_assets through precise verb choice.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage context through storage limits ('100MB free per wallet, 500 assets max') which hints at capacity constraints, but lacks explicit guidance on when to use versus alternatives like batch_download or prerequisites such as verifying wallet registration first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_artworksB
Read-onlyIdempotent
Inspect

Search 53K+ museum artworks from Alexandria Aeternum (MET, Chicago, NGA, Rijksmuseum, Smithsonian, Cleveland, Paris). FREE. (FREE)

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMax results (1-100)
queryYesSearch query (e.g. 'impressionist landscape', 'Monet', 'Dutch Golden Age')
museumNoFilter by museum (met, chicago, nga, rijks, smithsonian, cleveland, paris)
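A minimal sketch of client-side argument assembly for this tool, enforcing the documented limit range (1-100) and museum keys; the helper name `build_search_args` is illustrative, not part of the server API:

```python
MUSEUMS = {"met", "chicago", "nga", "rijks", "smithsonian", "cleveland", "paris"}

def build_search_args(query, limit=20, museum=None):
    """Assemble and sanity-check arguments for a search_artworks call."""
    if not query:
        raise ValueError("query is required")
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    args = {"query": query, "limit": limit}
    if museum is not None:
        if museum not in MUSEUMS:
            raise ValueError(f"unknown museum key: {museum}")
        args["museum"] = museum
    return args

args = build_search_args("Dutch Golden Age", limit=10, museum="rijks")
print(args)
```

Validating the museum key locally avoids a wasted round trip when the filter is mistyped.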
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly/idempotent/non-destructive status. Description adds valuable corpus context (53K+ records, museum sources) and cost info ('FREE'), but lacks details on response format, pagination behavior, or search relevance scoring.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with core info front-loaded, but it contains a redundant '(FREE)' repetition that adds no value. Otherwise efficient; the parenthetical museum list is dense but necessary for scope clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Good coverage for a search tool: identifies dataset size (53K+), provenance (specific museums), and cost. No output schema exists, but description sufficiently establishes the search domain. Missing only the return value description (metadata fields, image URLs, etc.).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear examples (e.g., 'Monet', 'Dutch Golden Age') and valid museum keys. Description mentions no parameters, but with complete schema documentation, baseline 3 is appropriate—description doesn't need to repeat parameter docs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: verb 'Search' + resource '53K+ museum artworks' + scope (7 specific museums listed). The description clearly identifies the Alexandria Aeternum corpus and distinguishes this from sibling tools like get_artwork (single fetch) by emphasizing the search/bulk nature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use search_artworks vs get_artwork or get_artwork_oracle. No mention of query syntax requirements or when filtering by museum is recommended. The 'FREE' repetition hints at cost but doesn't explain rate limits or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_toolsA
Read-onlyIdempotent
Inspect

Discover available tools by category or price without loading all schemas. Start here to save tokens. (FREE)

ParametersJSON Schema
NameRequiredDescriptionDefault
queryNoSearch query to filter tools by name or description
categoryNoFilter by categoryall
max_price_usdNoMax price per call in USD (0 = free only)
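Since all three parameters are optional, a discovery call can be as small as an empty argument object. A hedged sketch (the helper name `discovery_args` is illustrative; the `max_price_usd = 0` convention for free-only tools is taken from the schema):

```python
def discovery_args(query=None, category="all", max_price_usd=None):
    """Arguments for a search_tools discovery call; 0 means free tools only."""
    args = {"category": category}
    if query:
        args["query"] = query
    if max_price_usd is not None:
        if max_price_usd < 0:
            raise ValueError("max_price_usd cannot be negative")
        args["max_price_usd"] = max_price_usd
    return args

# Find free image-related tools before committing to paid calls.
free_image_tools = discovery_args(query="image", max_price_usd=0)
print(free_image_tools)
```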
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, non-destructive, idempotent). Description adds valuable behavioral context: 'without loading all schemas' explains efficiency benefits, 'save tokens' warns about token costs of alternatives, and '(FREE)' discloses pricing - none of which are in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, zero waste. First sentence establishes purpose and method, second gives usage priority ('Start here'), third gives cost signal ('FREE'). Front-loaded with the most critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 parameters (all optional), 100% schema coverage, and strong annotations declaring safety hints, the description sufficiently covers purpose, usage timing, and cost model. No output schema exists but description doesn't need to elaborate return values for a standard discovery tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (query, category, max_price_usd all well documented). Description adds no explicit parameter semantics, but baseline score is 3 when schema coverage exceeds 80% and carries the semantic load adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states 'Discover available tools by category or price' - specific verb (discover) + specific resource (tools) + scoping (by category/price). The phrase 'without loading all schemas' clearly distinguishes from sibling 'get_tool_schema', while 'tools' differentiates from 'search_artworks'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear guidance via 'Start here to save tokens' indicating it should be used as an entry point before expensive operations. Mentions 'without loading all schemas' implying contrast with full schema loading alternatives, though it doesn't explicitly name the alternative (get_tool_schema).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upscale_imageA
Idempotent
Inspect

Super-resolution using Real-ESRGAN on NVIDIA L4 GPU. 5 models for different content types. Default: 2x general upscale. ($0.20 / 2 GCX)

ParametersJSON Schema
NameRequiredDescriptionDefault
imageYesBase64-encoded PNG/JPEG image
modelNoESRGAN model to use. Options: 'realesrgan_x2plus' (2x, general — default), 'realesrgan_x4plus' (4x, general/photo), 'realesrgan_x4plus_anime' (4x, anime/illustrations), 'realesr_general_x4v3' (4x, fast general), 'realesr_animevideov3' (4x, anime video frames).realesrgan_x2plus
scaleNoShorthand: 2 selects x2plus, 4 selects x4plus. Ignored if model is specified directly.
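The model/scale interaction documented above (explicit `model` wins; `scale` is shorthand; x2plus is the default) can be mirrored client-side. A sketch under those assumptions, with an illustrative helper name:

```python
MODELS = {
    "realesrgan_x2plus",
    "realesrgan_x4plus",
    "realesrgan_x4plus_anime",
    "realesr_general_x4v3",
    "realesr_animevideov3",
}
SCALE_SHORTHAND = {2: "realesrgan_x2plus", 4: "realesrgan_x4plus"}

def resolve_model(model=None, scale=None):
    """Mirror the documented precedence: explicit model wins over scale."""
    if model is not None:
        if model not in MODELS:
            raise ValueError(f"unknown model: {model}")
        return model
    if scale is not None:
        if scale not in SCALE_SHORTHAND:
            raise ValueError("scale shorthand accepts only 2 or 4")
        return SCALE_SHORTHAND[scale]
    return "realesrgan_x2plus"  # documented default

# scale is ignored when a model is named directly
chosen = resolve_model(model="realesrgan_x4plus_anime", scale=2)
print(chosen)
```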
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish idempotency and non-destructiveness. The description adds critical behavioral context not in annotations: specific hardware constraints (NVIDIA L4), cost implications ($0.20 / 2 GCX), and the availability of content-specific models. It stops short of explaining the output format or what happens on GPU failure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of four information-dense fragments covering algorithm and hardware, model options, default behavior, and pricing. Every clause earns its place; no redundancy with the structured schema or annotations. The pricing parenthetical is appropriately terse for cost-critical infrastructure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Considering the high complexity (GPU compute, five ML models, costing) and lack of output schema, the description adequately covers the critical domain-specific context (cost, hardware, model specialization). However, it omits the return value format/output behavior, which should be described when no output schema is provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (detailed model enum descriptions, scale logic), the baseline is 3. The description adds synthesis by grouping the five models under 'different content types' and reinforcing the 2x default, but does not add new parameter constraints, formats, or validation rules beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific operation (Super-resolution), the exact algorithm/library (Real-ESRGAN), and the hardware (NVIDIA L4 GPU). This provides a precise technical scope that distinguishes it from sibling resize_image by specifying ML-based upscaling versus traditional interpolation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides implicit guidance through '5 models for different content types' and the default configuration (2x general), which helps users select appropriate models. However, it lacks explicit direction on when to use this costly GPU-intensive tool versus the simpler resize_image sibling, or cost-based decision criteria despite listing the price.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vectorize_imageA
Read-onlyIdempotent
Inspect

Convert raster images to SVG vector format. Supports color and binary modes with precision controls. Returns raw SVG XML string. FREE. (FREE)

ParametersJSON Schema
NameRequiredDescriptionDefault
modeNoVectorization modecolor
imageYesBase64-encoded PNG/JPEG image
filter_speckleNoSpeckle filter (0-100, higher = fewer small artifacts)
color_precisionNoColor clustering precision (1-10, higher = more colors)
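A sketch of local argument validation against the documented ranges (0-100 for `filter_speckle`, 1-10 for `color_precision`); the helper name is illustrative, and the binary/color mode values are assumed from the tool description:

```python
def vectorize_args(image_b64, mode="color", filter_speckle=None, color_precision=None):
    """Sanity-check vectorize_image arguments against the documented ranges."""
    if mode not in ("color", "binary"):
        raise ValueError("mode must be 'color' or 'binary'")
    args = {"image": image_b64, "mode": mode}
    if filter_speckle is not None:
        if not 0 <= filter_speckle <= 100:
            raise ValueError("filter_speckle must be 0-100")
        args["filter_speckle"] = filter_speckle
    if color_precision is not None:
        if not 1 <= color_precision <= 10:
            raise ValueError("color_precision must be 1-10")
        args["color_precision"] = color_precision
    return args

args = vectorize_args("aGVsbG8=", filter_speckle=20, color_precision=6)
print(args["mode"])
```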
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds critical output format disclosure ('Returns raw SVG XML string') and explains the precision control concept. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with clear purpose statement. Three substantive sentences efficiently cover functionality, parameters, and return value. Minor redundancy with 'FREE. (FREE)' at the end, otherwise well-structured with no waste in core sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates for missing output schema by explicitly stating return format (SVG XML string). Combined with rich annotations (readOnly, idempotent) and complete parameter documentation, the description provides sufficient context for a 4-parameter conversion tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds semantic value by grouping filter_speckle and color_precision as 'precision controls' and contextualizing the mode parameter as 'color and binary modes', providing conceptual scaffolding beyond individual parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb (convert), resource (raster images), and target format (SVG vector) clearly stated. Implicitly distinguishes from sibling raster operations like upscale_image or remove_background by specifying vector output format.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Describes supported modes (color/binary) and precision controls but provides no explicit guidance on when to use this tool versus alternatives like upscale_image, or when to choose binary vs color mode. No prerequisites or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_provenanceB
Read-onlyIdempotent
Inspect

Strip-proof provenance verification via Aegis hash index. FREE - no payment required. (FREE)

ParametersJSON Schema
NameRequiredDescriptionDefault
imageYesBase64-encoded PNG/JPEG image
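Since the tool takes a single Base64 image argument, the only client-side work is encoding. A minimal sketch (the PNG bytes below are a stub, not a valid image; the helper name is illustrative):

```python
import base64

def provenance_args(image_bytes):
    """Wrap raw PNG/JPEG bytes as the single Base64 argument the tool expects."""
    return {"image": base64.b64encode(image_bytes).decode("ascii")}

args = provenance_args(b"\x89PNG\r\n\x1a\nstub")
print(sorted(args))
```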
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds implementation context ('Aegis hash index', 'strip-proof') and cost structure, but does not disclose return value structure, rate limits, or verification failure modes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief but contains redundancy: 'FREE', 'no payment required', and '(FREE)' convey identical billing information in close succession, violating the 'every sentence earns its place' principle despite being front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has low complexity (1 parameter) and rich annotations covering behavioral hints. However, with no output schema provided, the description omits what verification returns (boolean, object, confidence score?), leaving a gap for an agent needing to handle responses.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, documenting the Base64 image requirement. The description adds no additional parameter semantics (e.g., size limits, supported sub-formats), warranting the baseline score of 3 per rubric.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('verification') and resource ('provenance') with technical specificity ('Aegis hash index', 'strip-proof'). However, it does not explicitly differentiate from the sibling tool `register_hash` or clarify when to verify versus register.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description contains billing constraint information ('FREE') but provides no guidance on when to select this tool versus alternatives like `register_hash` or `get_artwork`, nor does it mention prerequisites or when not to use the tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

watermark_detectA
Read-onlyIdempotent
Inspect

Detect and extract invisible DCT watermark from an image. Returns the embedded text payload if found. FREE. (FREE)

ParametersJSON Schema
NameRequiredDescriptionDefault
imageYesBase64-encoded PNG/JPEG image
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnly/destructive hints), the description adds valuable behavioral context: the algorithm type ('DCT'), the return value format ('embedded text payload if found'), and cost information ('FREE'). It does not describe error cases or what is returned when no watermark exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with the core action and efficiently sized for a single-parameter tool. Minor deduction for the redundant '(FREE)' repetition at the end.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read operation with good annotations, the description adequately compensates for the missing output schema by specifying the return payload format ('embedded text payload') and includes cost information, leaving no significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents the 'image' parameter (Base64-encoded PNG/JPEG). The description mentions 'from an image' but adds no semantic details beyond what the schema already provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides specific verbs ('Detect and extract'), specifies the resource ('invisible DCT watermark'), and distinguishes from sibling 'watermark_embed' by clearly stating this is for extraction/reading rather than embedding.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the specific action ('detect') implies usage context contrasting with 'watermark_embed', there is no explicit guidance on when to use this versus alternatives, prerequisites, or conditions for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

watermark_embedC
Read-onlyIdempotent
Inspect

Embed invisible DCT-domain watermark into an image. Encodes a text payload into luminance channel frequency coefficients. Survives light compression. FREE. (FREE)

ParametersJSON Schema
NameRequiredDescriptionDefault
imageYesBase64-encoded PNG/JPEG image
payloadYesText payload to embed (max 256 chars)
strengthNoEmbedding strength (0.1-1.0, higher = more robust but more visible)
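The documented limits (256-character payload, 0.1-1.0 strength) can be enforced before the call; the watermarked result would then round-trip through watermark_detect. A sketch with an illustrative helper name and placeholder values:

```python
def embed_args(image_b64, payload, strength=None):
    """Check watermark_embed arguments against the documented limits."""
    if len(payload) > 256:
        raise ValueError("payload is limited to 256 characters")
    args = {"image": image_b64, "payload": payload}
    if strength is not None:
        if not 0.1 <= strength <= 1.0:
            raise ValueError("strength must be between 0.1 and 1.0")
        args["strength"] = strength
    return args

# Higher strength survives compression better at the cost of visibility.
args = embed_args("aGVsbG8=", "artist:demo-0001", strength=0.5)
print(args["payload"])
```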
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description claims the tool performs an 'Embed' operation (writing/encoding data into an image), which contradicts the annotation readOnlyHint=true. Embedding inherently modifies image data, making readOnlyHint semantically incorrect for this operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is mostly concise and front-loaded with the core action, but the redundant 'FREE. (FREE)' repetition wastes tokens and muddies the signal without adding information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description adequately explains the technical method (DCT embedding) and robustness characteristics, but lacks explanation of the output format (since no output schema exists) and fails to resolve the tension between the embedding operation and the read-only annotation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the description meets the baseline. It adds contextual relevance by mentioning 'DCT-domain' and 'luminance channel,' which clarify the embedding technique, but does not elaborate on parameter syntax beyond the schema's 'maxLength' and range definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool embeds an invisible DCT-domain watermark with technical specificity (luminance channel frequency coefficients), providing specific verb and resource. However, it lacks explicit differentiation from the sibling 'watermark_detect' tool regarding whether this creates a new image or modifies in place.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes the watermark 'survives light compression,' implying usage constraints, but provides no explicit guidance on when to use this versus 'watermark_detect' or other image protection methods like visible watermarking.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
