
Spritesheet Forge

Server Details

Game-dev sprite tools: PNG/GIF to spritesheet, split, trim, animate. OAuth-authenticated MCP server.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct operation (e.g., frames_to_animation vs gif_to_frames, png_to_spritesheet vs split_spritesheet). No overlap in purpose, making selection unambiguous.

Naming Consistency: 5/5

All tool names follow a consistent lowercase underscore pattern with clear verb_noun structure (e.g., frames_to_animation, split_spritesheet, trim_png). No mixed conventions or vague names.

Tool Count: 5/5

With 8 tools, the server is well-scoped for its domain of spritesheet and animation manipulation. Each tool serves a core function without redundancy or excessive specialization.

Completeness: 5/5

The tool surface covers the full lifecycle: creating spritesheets (png_to_spritesheet, gif_to_spritesheet), extracting/slicing (split_spritesheet, gif_to_frames), converting to animation (frames_to_animation, spritesheet_to_animation), trimming (trim_png), and server info (server_info). No obvious gaps.

Available Tools

8 tools
frames_to_animation: A

Assemble multiple PNG files into an animated GIF or animated WebP.

Parameters (JSON Schema)

loop (optional): Loop count. 0 = infinite. Default: 0
files (required): PNG frames — HTTPS URLs, data URIs, or output URLs from previous tool calls (pass directly, no re-encoding needed). For local files < 4 MB each: base64-encode the bytes and prepend "data:image/png;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending. For files ≥ 4 MB each: call server_info to get the upload_url, POST the file there (multipart/form-data, field "file", Bearer token), and pass the returned URL.
resize (optional): Dimension mismatch handling. Default: transparent
quality (optional): WebP lossy quality 0-100. Default: 80
duration (optional): Frame duration in ms (10-10000). Default: 100
lossless (optional): WebP lossless mode. Default: false
bg_fill_color (optional): Fill color for resize=fill. Hex #RRGGBB. Default: #000000
output_format (optional): Output format. Default: gif
file_name_order (optional): Sort by _N filename suffix. Default: false

Output Schema

Parameters (JSON Schema)

url (required): Download URL for the output file (expires in 1 hour)
quota (required)
expires_at (required): ISO 8601 expiry timestamp
size_bytes (required): Output file size in bytes
content_type (required): MIME type of the output file (image/png, image/gif, application/zip, etc.)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description does not disclose behavioral traits beyond what annotations and parameter descriptions already provide. It does not contradict annotations. With rich parameter descriptions (100% coverage), the tool's behavior is adequately described, but the main description adds little context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single clear sentence with no waste. It is front-loaded and efficient, earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the moderate complexity (9 params, 1 required) and rich schema coverage, the description is adequate but lacks usage guidelines and behavioral context. It could mention default format or when to use this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not add meaning beyond the input schema, which already has 100% parameter description coverage. The baseline of 3 is appropriate as the description offers no additional param context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Assemble multiple PNG files into an animated GIF or animated WebP.' It specifies the action (assemble), input (multiple PNG files), and output (animated GIF/WebP), distinguishing it from siblings that handle other formats or spritesheets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating what the tool does, but does not explicitly guide when to use it versus alternatives like spritesheet_to_animation or gif_to_frames. No when-not or when-to-use conditions are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gif_to_frames: A

Extract all frames from a GIF and return them as individual PNGs in a ZIP archive.

Parameters (JSON Schema)

file (required): GIF file — HTTPS URL, data URI, or output URL from a previous tool call (pass directly, no re-encoding needed). For local files < ~185 KB: base64-encode the bytes and prepend "data:image/gif;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending (shell encoders like openssl insert newlines that cause INVALID_BASE64). For larger files or any file encoded via a shell command: call server_info to get the upload_url and token instructions, POST the file there (multipart/form-data, field "file", Bearer token required), and pass the returned URL.
bg_color (optional): "auto" or hex "#RRGGBB"
remove_bg (optional): Remove background from each frame. Default: false
tolerance (optional): Background removal threshold 0-255. Default: 30
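The upload path for larger files (multipart/form-data, field "file", Bearer token) can be sketched with the Python standard library. build_upload_request is a hypothetical helper; upload_url and token are the values server_info is described as providing:

```python
import io
import urllib.request
import uuid

def build_upload_request(upload_url: str, token: str, file_bytes: bytes,
                         filename: str = "input.gif") -> urllib.request.Request:
    """Build the multipart/form-data POST: one field named "file",
    authorized with a Bearer token."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(f'Content-Disposition: form-data; name="file"; '
               f'filename="{filename}"\r\n'.encode())
    body.write(b"Content-Type: application/octet-stream\r\n\r\n")
    body.write(file_bytes)  # raw bytes, no base64 needed on this path
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return urllib.request.Request(
        upload_url,
        data=body.getvalue(),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )
```

Sending the request with urllib.request.urlopen and passing the returned URL as the file argument would complete the flow; the response shape is whatever the server's upload endpoint actually returns.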

Output Schema

Parameters (JSON Schema)

url (required): Download URL for the output file (expires in 1 hour)
quota (required)
expires_at (required): ISO 8601 expiry timestamp
size_bytes (required): Output file size in bytes
content_type (required): MIME type of the output file (image/png, image/gif, application/zip, etc.)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate the tool is not read-only, not idempotent, and not destructive. The description adds that it returns a ZIP archive, but does not disclose potential side effects like network access for file URLs, which is hinted by 'openWorldHint=true'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that conveys the core functionality without any unnecessary words. It is front-loaded with the key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the parameter descriptions are detailed and an output schema exists, the short description is sufficient to cover the main functionality. However, it does not explain prerequisites or optional parameters, relying on the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The tool description does not add any parameter-specific meaning beyond the schema; all parameter details are in the input schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('extract all frames'), the input ('a GIF'), and the output ('individual PNGs in a ZIP archive'). It distinguishes the tool from siblings like 'gif_to_spritesheet' by specifying the output format.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage ('use when you need frames from a GIF'), but it does not explicitly state when to use this tool versus alternatives like 'gif_to_spritesheet' or 'frames_to_animation'. No exclusions or conditions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gif_to_spritesheet: A

Convert a GIF animation into a spritesheet PNG with all frames arranged in a grid. Optionally remove the background.

Parameters (JSON Schema)

file (required): GIF file — HTTPS URL, data URI, or output URL from a previous tool call (pass directly, no re-encoding needed). For local files < ~185 KB: base64-encode the bytes and prepend "data:image/gif;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending (shell encoders like openssl insert newlines that cause INVALID_BASE64). For larger files or any file encoded via a shell command: call server_info to get the upload_url and token instructions, POST the file there (multipart/form-data, field "file", Bearer token required), and pass the returned URL.
columns (optional): Grid columns. Auto-calculated if omitted.
padding (optional): Pixel gap between frames. Default: 0
bg_color (optional): "auto" or hex "#RRGGBB". Default: "auto"
remove_bg (optional): Remove background from each frame. Default: false
tolerance (optional): Background removal threshold 0-255. Default: 30

Output Schema

Parameters (JSON Schema)

url (required): Download URL for the output file (expires in 1 hour)
quota (required)
expires_at (required): ISO 8601 expiry timestamp
size_bytes (required): Output file size in bytes
content_type (required): MIME type of the output file (image/png, image/gif, application/zip, etc.)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate a write operation (readOnlyHint=false) with no destruction. The description adds significant behavioral context beyond annotations: detailed file input handling (HTTPS, data URI, local base64 with whitespace stripping, upload via server_info). It also clarifies the optional background removal behavior. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the main purpose and key features. The file input handling adds length but provides necessary detail. One or two sentences could be trimmed, but overall it efficiently communicates the core function and critical parameter guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool complexity (6 params, output schema), the description covers the core conversion, optional background removal, and detailed file input handling. It does not explain the output schema or limitations (e.g., max file size), but the presence of a separate output schema shifts the burden. The provided details are sufficient for most use cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all 6 parameters. The description adds significant value for the 'file' parameter by detailing multiple input formats and encoding requirements (e.g., stripping whitespace from base64). For other parameters, it largely restates schema descriptions, but the file details justify a score above baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool converts a GIF to a spritesheet PNG in a grid, with optional background removal. This clearly distinguishes it from siblings like gif_to_frames (extracts frames) or png_to_spritesheet (converts PNG sequences). The verb 'Convert' and resource 'GIF animation into a spritesheet' are specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives. While sibling tool names provide some context, there is no direct guidance on scenarios where this tool is preferred or not. The omission of usage context leaves room for ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

png_to_spritesheet: A

Merge multiple PNG files into a single spritesheet. Supports grid, horizontal, vertical, and packed (bin-packed) layouts with optional TexturePacker-compatible JSON metadata. Returns a download URL.

Parameters (JSON Schema)

align (optional)
files (required): PNG files — HTTPS URLs, data URIs, or output URLs from previous tool calls (pass directly, no re-encoding needed). For local files < ~185 KB each: base64-encode the bytes and prepend "data:image/png;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending (shell encoders like openssl insert newlines that cause INVALID_BASE64). For larger files or any file encoded via a shell command: call server_info to get the upload_url and token instructions, POST the file there (multipart/form-data, field "file", Bearer token required), and pass the returned URL.
layout (optional): Frame arrangement. Default: grid
columns (optional): Grid columns. Auto-calculated if omitted.
extrude (optional): Extrude outermost pixels by N px per frame
padding (optional): Pixel gap between frames
bg_color (optional): "transparent" or hex "#RRGGBB"
fit_mode (optional)
cell_mode (optional): Cell sizing mode. Default: auto_max
cell_width (optional): Required when cell_mode=fixed
power_of_2 (optional): Pad output to next power of 2
trim_input (optional): Auto-trim transparent edges before compositing
cell_height (optional): Required when cell_mode=fixed
file_name_order (optional): Sort by _N filename suffix
metadata_format (optional): Atlas metadata format. Required (non-none) when layout=packed

Output Schema

Parameters (JSON Schema)

url (required): Download URL for the output file (expires in 1 hour)
quota (required)
expires_at (required): ISO 8601 expiry timestamp
size_bytes (required): Output file size in bytes
content_type (required): MIME type of the output file (image/png, image/gif, application/zip, etc.)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide basic hints. Description adds behavioral details like layout options, metadata format, and download URL return, enhancing transparency beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is three concise sentences with no waste. Purpose is front-loaded. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (15 parameters, 1 required, output schema exists), the description covers key behaviors: layouts, metadata, and output type. Could mention more about parameter varieties but isn't necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 87%, so baseline is 3. Description does not add much beyond summarizing overall functionality; individual parameters are well-documented in schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Merge multiple PNG files into a single spritesheet' with specific verb and resource, and lists supported layouts. It distinguishes from siblings like gif_to_spritesheet by specifying PNG files.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for PNG-to-spritesheet conversion but lacks explicit guidance on when not to use or alternatives. Context is clear from the title and sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

server_info: A
Read-only, Idempotent

Returns this server's runtime configuration: upload endpoint URL, output file TTL, file size limits, and base64 encoding rules. Call this before working with large files (≥ 4 MB) or when building multi-step workflows that chain tool outputs.

Parameters (JSON Schema)

No parameters

Output Schema

Parameters (JSON Schema)

upload_url (required): URL for uploading files via multipart/form-data (Bearer token required)
max_file_bytes (required): Maximum accepted file size in bytes
file_input_rules (required): Guidance for agents on how to pass file inputs
output_ttl_seconds (required): Seconds until output files expire
base64_threshold_bytes (required): Files smaller than this can be sent as base64 data URIs
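The fields above imply a simple decision rule for agents choosing between the data-URI and upload paths. file_argument is a hypothetical helper that assumes a server_info response shaped like this output schema:

```python
import base64

def file_argument(server_info: dict, file_bytes: bytes,
                  mime: str = "image/png"):
    """Decide how to pass a local file to a tool call.

    Returns a data URI when the file is under base64_threshold_bytes,
    or None to signal that the caller should POST the bytes to
    server_info["upload_url"] and pass the returned URL instead.
    """
    if len(file_bytes) > server_info["max_file_bytes"]:
        raise ValueError("file exceeds the server's maximum size")
    if len(file_bytes) < server_info["base64_threshold_bytes"]:
        # b64encode emits a single line, so no whitespace stripping needed
        payload = base64.b64encode(file_bytes).decode("ascii")
        return f"data:{mime};base64,{payload}"
    return None  # too big for base64: upload via upload_url
```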
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly=true and idempotent=true, so the tool is clearly safe. Description adds specific details about what configuration is returned, enriching beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundant words. First sentence states purpose, second provides actionable usage advice. Efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no parameters and an output schema present, description fully covers tool's purpose and usage context. No missing information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist (input schema empty, 100% coverage). Description correctly omits param details; no additional meaning needed. Baseline 4 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states 'returns this server's runtime configuration' and lists specific items (upload endpoint URL, TTL, limits, encoding rules). Clearly distinguishes from sibling tools which are all image processing tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear guidance: 'Call this before working with large files (≥ 4 MB) or when building multi-step workflows.' While not exhaustive, it gives practical usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

split_spritesheet: A

Slice a spritesheet PNG into individual frames, generate TexturePacker-compatible atlas JSON, or both. Provide columns+rows (grid mode) or cell_width+cell_height (cell mode).

Parameters (JSON Schema)

file (required): Spritesheet PNG — HTTPS URL, data URI, or output URL from a previous tool call (pass directly, no re-encoding needed). For local files < 4 MB: base64-encode the bytes and prepend "data:image/png;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending. For files ≥ 4 MB: call server_info to get the upload_url, POST the file there (multipart/form-data, field "file", Bearer token), and pass the returned URL.
rows (optional): Grid rows (grid mode)
output (optional): Default: frames
columns (optional): Grid columns (grid mode)
padding (optional)
trim_top (optional)
row_range (optional)
trim_left (optional)
cell_width (optional): Cell width in px (cell mode)
skip_empty (optional): Remove fully transparent frames. Default: true
trim_right (optional)
cell_height (optional): Cell height in px (cell mode)
frame_count (optional)
trim_bottom (optional)
column_range (optional): e.g. "0-5" or "2"
metadata_format (optional)
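The grid-mode vs cell-mode distinction (columns+rows vs cell_width+cell_height) can be illustrated with a small sketch. frame_rects is a hypothetical helper showing how the two modes describe the same frame layout, not the server's slicing code:

```python
def frame_rects(sheet_w, sheet_h, columns=None, rows=None,
                cell_width=None, cell_height=None, padding=0):
    """Return (x, y, w, h) for each frame, left-to-right, top-to-bottom.

    Grid mode (columns + rows): derive the cell size from the sheet.
    Cell mode (cell_width + cell_height): derive the grid from the cell.
    """
    if columns and rows:  # grid mode
        cw = (sheet_w - (columns - 1) * padding) // columns
        ch = (sheet_h - (rows - 1) * padding) // rows
    elif cell_width and cell_height:  # cell mode
        cw, ch = cell_width, cell_height
        columns = (sheet_w + padding) // (cw + padding)
        rows = (sheet_h + padding) // (ch + padding)
    else:
        raise ValueError("provide columns+rows or cell_width+cell_height")
    return [(c * (cw + padding), r * (ch + padding), cw, ch)
            for r in range(rows) for c in range(columns)]
```

For a fully tiled sheet the two modes are interchangeable; cell mode becomes necessary when the sheet's dimensions are not an even multiple of the frame size you want.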

Output Schema

Parameters (JSON Schema)

url (required): Download URL for the output file (expires in 1 hour)
quota (required)
expires_at (required): ISO 8601 expiry timestamp
size_bytes (required): Output file size in bytes
content_type (required): MIME type of the output file (image/png, image/gif, application/zip, etc.)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates the tool produces output but does not elaborate on side effects. Annotations show readOnlyHint=false and destructiveHint=false, implying mutation but no destruction; the description adds context about producing frames and JSON. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no filler, front-loaded with main verb 'Slice' and output. Each sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 16 parameters and output schema exist, the description is brief and omits details like default values (e.g., skip_empty=true, output default 'frames'). It could provide more context on parameter interactions or expected output structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%; the description only mentions the two parameter groups (grid vs cell mode), not the many undocumented optional parameters like padding, trim, or frame_count. The schema provides some descriptions, but the description does not compensate for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool slices a spritesheet into frames and/or generates atlas JSON, and distinguishes between grid and cell modes. It is specific and differentiates from sibling tools like gif_to_spritesheet or trim_png.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use grid mode (columns+rows) vs cell mode (cell_width+cell_height), giving clear context. However, it does not explicitly exclude use with GIFs or other formats, nor does it mention sibling tools for alternative tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

spritesheet_to_animation: A

Slice a spritesheet PNG into frames and produce an animated GIF or WebP. Provide columns+rows (grid mode) or cell_width+cell_height (cell mode).

Parameters (JSON Schema)

file (required): Spritesheet PNG — HTTPS URL, data URI, or output URL from a previous tool call (pass directly, no re-encoding needed). For local files < ~185 KB: base64-encode the bytes and prepend "data:image/png;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending (shell encoders like openssl insert newlines that cause INVALID_BASE64). For larger files or any file encoded via a shell command: call server_info to get the upload_url and token instructions, POST the file there (multipart/form-data, field "file", Bearer token required), and pass the returned URL.
loop (optional): Loop count. 0 = infinite. Default: 0
rows (optional): Grid rows (grid mode)
columns (optional): Grid columns (grid mode)
padding (optional): Pixel gap between cells. Default: 0
quality (optional): WebP quality 0-100. Default: 80
duration (optional): Frame duration in ms. Default: 100
lossless (optional): WebP lossless. Default: false
trim_top (optional)
row_range (optional)
trim_left (optional)
cell_width (optional): Cell width in px (cell mode)
skip_empty (optional): Auto-remove fully transparent frames. Default: true
trim_right (optional)
cell_height (optional): Cell height in px (cell mode)
frame_count (optional): Actual frame count for incomplete last row
trim_bottom (optional)
column_range (optional): e.g. "0-5" or "2"
output_format (optional): Default: gif

Output Schema

Parameters (JSON Schema)

url (required): Download URL for the output file (expires in 1 hour)
quota (required)
expires_at (required): ISO 8601 expiry timestamp
size_bytes (required): Output file size in bytes
content_type (required): MIME type of the output file (image/png, image/gif, application/zip, etc.)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate non-read-only and non-destructive behavior. The description adds minimal behavioral context beyond the core operation. It does not cover upload requirements or side effects, but the file parameter description in the schema handles upload instructions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. First sentence states purpose and output format, second explains the two parameter modes. Perfectly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (19 parameters) and the presence of a detailed output schema, the description adequately covers the essential decision (grid vs cell mode). It does not discuss return values or error scenarios, but the output schema and parameter descriptions fill those gaps sufficiently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 74%, so the schema already documents most parameters. The description adds value by grouping columns+rows as grid mode and cell_width+cell_height as cell mode, which aids understanding. However, it does not elaborate on parameters like trim_top or row_range, so it only slightly enhances the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (slice spritesheet into frames and produce animation), the input (spritesheet PNG), and the output (animated GIF or WebP). It also specifies two modes (grid vs cell) with required parameters, effectively distinguishing this tool from siblings like split_spritesheet or gif_to_frames.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have a spritesheet and want an animation, offering clear mode alternatives. However, it does not explicitly state when not to use this tool or reference sibling tools, which would help the agent decide between alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trim_png: A

Crop transparent edges from one or more PNG files. Single file returns PNG; multiple files return a ZIP.

Parameters (JSON Schema)

- files (required): PNG files — HTTPS URLs, data URIs, or output URLs from previous tool calls (pass directly, no re-encoding needed). For local files < ~185 KB each: base64-encode the bytes and prepend "data:image/png;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending (shell encoders like openssl insert newlines that cause INVALID_BASE64). For larger files or any file encoded via a shell command: call server_info to get the upload_url and token instructions, POST the file there (multipart/form-data, field "file", Bearer token required), and pass the returned URL.
- padding (optional, default 0): Transparent margin to preserve around trimmed content.
- threshold (optional, default 0): Alpha threshold 0-255. Pixels with alpha ≤ threshold are trimmed.
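The whitespace requirement for the `files` parameter is the easiest part to get wrong when encoding from a shell. A minimal client-side sketch (the `png_to_data_uri` helper is hypothetical; only the `data:image/png;base64,` prefix and the strip-all-whitespace rule come from the description):

```python
import base64

def png_to_data_uri(png_bytes: bytes) -> str:
    # encodebytes inserts a newline every 76 chars, much like openssl;
    # strip ALL whitespace before prepending the prefix, as the tool
    # description requires, to avoid INVALID_BASE64 errors.
    b64 = base64.encodebytes(png_bytes).decode("ascii")
    b64 = "".join(b64.split())
    return "data:image/png;base64," + b64
```

The resulting string can be passed directly as an entry in `files`, provided the file stays under the ~185 KB limit the description states.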

Output Schema

- url (required): Download URL for the output file (expires in 1 hour)
- quota (required)
- expires_at (required): ISO 8601 expiry timestamp
- size_bytes (required): Output file size in bytes
- content_type (required): MIME type of the output file (image/png, image/gif, application/zip, etc.)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses output format behavior (a single file returns a PNG; multiple files return a ZIP). Adds details on the padding and threshold parameters beyond the schema. No contradiction with annotations; non-destructive behavior is implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
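The padding/threshold semantics the description documents reduce to a bounding-box computation over the alpha channel. A sketch over a plain 2D alpha array (a hypothetical helper; the server's actual trim logic is not published):

```python
def trim_bbox(alpha, threshold=0, padding=0):
    """Bounding box (left, top, right, bottom) of pixels with
    alpha > threshold, expanded by `padding` and clamped to the
    image; None if every pixel is at or below the threshold."""
    h, w = len(alpha), len(alpha[0])
    coords = [(x, y) for y in range(h) for x in range(w)
              if alpha[y][x] > threshold]
    if not coords:
        return None  # nothing opaque enough to keep
    xs = [c[0] for c in coords]
    ys = [c[1] for c in coords]
    return (max(0, min(xs) - padding), max(0, min(ys) - padding),
            min(w - 1, max(xs) + padding), min(h - 1, max(ys) + padding))
```

Note the strict `>` comparison: it matches the description's "alpha ≤ threshold are trimmed", so the default `threshold=0` keeps any pixel that is even faintly visible.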

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single paragraph but front-loaded with purpose. All sentences earn their place; however, the encoding instructions could be structured (e.g., bullet points) for easier parsing. Still concise enough given the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers all parameters, output format, edge cases (base64 encoding errors), and cross-tool dependency (server_info for uploads). Output schema exists, so return values are already documented. Complete for a tool with multiple input methods.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
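The upload contract the description references (POST multipart/form-data with field "file" and a Bearer token) can be sketched with only the standard library. `build_multipart` is a hypothetical helper; the actual `upload_url` and token instructions come from `server_info` at runtime, and the endpoint's behavior beyond what the description states is an assumption:

```python
import uuid

def build_multipart(field: str, filename: str, data: bytes,
                    content_type: str = "image/png"):
    """Build a multipart/form-data body with a single file part.
    Returns (content_type_header_value, body_bytes)."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; '
        f'filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode("ascii")
    tail = f"\r\n--{boundary}--\r\n".encode("ascii")
    return f"multipart/form-data; boundary={boundary}", head + data + tail
```

The body would then be POSTed to the upload_url with an `Authorization: Bearer <token>` header (e.g. via `urllib.request`), and the returned URL passed in `files` on the next call.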

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Adds significant value beyond schema: detailed instructions for `files` parameter including base64 whitespace warnings and server upload procedure. Explains `padding` and `threshold` defaults and meanings. Full schema coverage (100%) is supplemented with practical usage tips.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states that the tool crops transparent edges from PNG files and specifies the output format (a single PNG, or a ZIP for multiple files). Distinguishes it from sibling tools, which deal with animations or spritesheets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides comprehensive guidance on file input methods (URLs, data URIs, local files) with specific encoding and upload instructions. Does not explicitly exclude use with non-PNG files or compare to sibling tools, but context implies it's for static PNG trimming.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
