Spritesheet Forge
Server Details
Game-dev sprite tools: PNG/GIF to spritesheet, split, trim, animate. OAuth-authenticated MCP server.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across all 8 tools (8 of 8 scored).
Each tool targets a distinct operation (e.g., frames_to_animation vs gif_to_frames, png_to_spritesheet vs split_spritesheet). No overlap in purpose, making selection unambiguous.
All tool names follow a consistent lowercase underscore pattern with clear verb_noun structure (e.g., frames_to_animation, split_spritesheet, trim_png). No mixed conventions or vague names.
With 8 tools, the server is well-scoped for its domain of spritesheet and animation manipulation. Each tool serves a core function without redundancy or excessive specialization.
The tool surface covers the full lifecycle: creating spritesheets (png_to_spritesheet, gif_to_spritesheet), extracting/slicing (split_spritesheet, gif_to_frames), converting to animation (frames_to_animation, spritesheet_to_animation), trimming (trim_png), and server info (server_info). No obvious gaps.
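Because every tool's output schema includes a download URL that later tools accept directly as a file input, the lifecycle steps above can be chained without re-encoding. A minimal sketch of that pattern, assuming a `call` function that invokes a tool and returns its JSON response; the "url" field name comes from the shared output schema, and everything else (function names, fake URLs) is hypothetical:

```python
def chain(call, steps):
    """Run tools in sequence, feeding each output URL into the next call.

    call(tool_name, args) must return a dict with a "url" field, matching
    the shared output schema in this listing. The chaining helper itself
    is a sketch, not part of the server.
    """
    result = None
    for tool, args in steps:
        if result is not None:
            # Pass the previous output URL directly; per the parameter
            # descriptions, no re-encoding is needed.
            args = {**args, "file": result["url"]}
        result = call(tool, args)
    return result
```

For example, `gif_to_spritesheet` followed by `spritesheet_to_animation` round-trips a GIF through a spritesheet using only the returned URLs.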
Available Tools
8 tools

frames_to_animation
Assemble multiple PNG files into an animated GIF or animated WebP.
| Name | Required | Description | Default |
|---|---|---|---|
| loop | No | Loop count. 0 = infinite. Default: 0 | |
| files | Yes | PNG frames — HTTPS URLs, data URIs, or output URLs from previous tool calls (pass directly, no re-encoding needed). For local files < 4 MB each: base64-encode the bytes and prepend "data:image/png;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending. For files ≥ 4 MB each: call server_info to get the upload_url, POST the file there (multipart/form-data, field "file", Bearer token), and pass the returned URL. | |
| resize | No | Dimension mismatch handling. Default: transparent | |
| quality | No | WebP lossy quality 0-100. Default: 80 | |
| duration | No | Frame duration in ms (10-10000). Default: 100 | |
| lossless | No | WebP lossless mode. Default: false | |
| bg_fill_color | No | Fill color for resize=fill. Hex #RRGGBB. Default: #000000 | |
| output_format | No | Output format. Default: gif | |
| file_name_order | No | Sort by _N filename suffix. Default: false |
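The `files` rules in the table above can be followed with a small helper. This is a sketch, not server code: the data-URI prefix and the strip-all-whitespace rule come from the parameter description, while the helper names and the exact JSON request shape are assumptions.

```python
import base64

def png_data_uri(png_bytes: bytes) -> str:
    """Encode PNG bytes as a data URI with no embedded whitespace."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    # Per the files rule: strip ALL whitespace/newlines before prepending.
    return "data:image/png;base64," + "".join(b64.split())

def build_frames_payload(frames, duration_ms=100, loop=0):
    """Assumed JSON shape for a frames_to_animation call.

    Only the parameter names (files, duration, loop) come from the table
    above; the wire format is a hypothetical sketch.
    """
    return {
        "files": [png_data_uri(f) for f in frames],
        "duration": duration_ms,
        "loop": loop,  # 0 = infinite
    }
```

Note this inline route only applies to files under the size threshold; larger files go through the `server_info` upload flow instead.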
Output Schema
| Name | Required | Description |
|---|---|---|
| url | Yes | Download URL for the output file (expires in 1 hour) |
| quota | Yes | |
| expires_at | Yes | ISO 8601 expiry timestamp |
| size_bytes | Yes | Output file size in bytes |
| content_type | Yes | MIME type of the output file (image/png, image/gif, application/zip, etc.) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description does not disclose behavioral traits beyond what annotations and parameter descriptions already provide. It does not contradict annotations. With rich parameter descriptions (100% coverage), the tool's behavior is adequately described, but the main description adds little context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single clear sentence with no waste. It is front-loaded and efficient, earning its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the moderate complexity (9 params, 1 required) and rich schema coverage, the description is adequate but lacks usage guidelines and behavioral context. It could mention default format or when to use this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not add meaning beyond the input schema, which already has 100% parameter description coverage. The baseline of 3 is appropriate as the description offers no additional param context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Assemble multiple PNG files into an animated GIF or animated WebP.' It specifies the action (assemble), input (multiple PNG files), and output (animated GIF/WebP), distinguishing it from siblings that handle other formats or spritesheets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating what the tool does, but does not explicitly guide when to use it versus alternatives like spritesheet_to_animation or gif_to_frames. No when-not or when-to-use conditions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gif_to_frames
Extract all frames from a GIF and return them as individual PNGs in a ZIP archive.
| Name | Required | Description | Default |
|---|---|---|---|
| file | Yes | GIF file — HTTPS URL, data URI, or output URL from a previous tool call (pass directly, no re-encoding needed). For local files < ~185 KB: base64-encode the bytes and prepend "data:image/gif;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending (shell encoders like openssl insert newlines that cause INVALID_BASE64). For larger files or any file encoded via a shell command: call server_info to get the upload_url and token instructions, POST the file there (multipart/form-data, field "file", Bearer token required), and pass the returned URL. | |
| bg_color | No | "auto" or hex "#RRGGBB" | |
| remove_bg | No | Remove background from each frame. Default: false | |
| tolerance | No | Background removal threshold 0-255. Default: 30 |
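The `file` parameter warns that shell encoders such as openssl wrap base64 output in newlines, which triggers INVALID_BASE64. A defensive cleanup step, sketched in Python (the function name is hypothetical; the validation step is an extra safeguard the listing does not require):

```python
import base64

def clean_shell_base64(b64_text: str) -> str:
    """Strip line-wrapping whitespace from shell-produced base64.

    openssl and similar encoders wrap output at fixed line widths; the
    data URI must be one unbroken token.
    """
    cleaned = "".join(b64_text.split())
    base64.b64decode(cleaned, validate=True)  # raises if still malformed
    return cleaned
```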
Output Schema
| Name | Required | Description |
|---|---|---|
| url | Yes | Download URL for the output file (expires in 1 hour) |
| quota | Yes | |
| expires_at | Yes | ISO 8601 expiry timestamp |
| size_bytes | Yes | Output file size in bytes |
| content_type | Yes | MIME type of the output file (image/png, image/gif, application/zip, etc.) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate the tool is not read-only, not idempotent, and not destructive. The description adds that it returns a ZIP archive, but does not disclose potential side effects like network access for file URLs, which is hinted by 'openWorldHint=true'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that conveys the core functionality without any unnecessary words. It is front-loaded with the key action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the parameter descriptions are detailed and an output schema exists, the short description is sufficient to cover the main functionality. However, it does not explain prerequisites or optional parameters, relying on the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The tool description does not add any parameter-specific meaning beyond the schema; all parameter details are in the input schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('extract all frames'), the input ('a GIF'), and the output ('individual PNGs in a ZIP archive'). It distinguishes the tool from siblings like 'gif_to_spritesheet' by specifying the output format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool (whenever individual frames are needed from a GIF), but it does not explicitly state when to prefer it over alternatives like 'gif_to_spritesheet' or 'frames_to_animation'. No exclusions or conditions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gif_to_spritesheet
Convert a GIF animation into a spritesheet PNG with all frames arranged in a grid. Optionally remove the background.
| Name | Required | Description | Default |
|---|---|---|---|
| file | Yes | GIF file — HTTPS URL, data URI, or output URL from a previous tool call (pass directly, no re-encoding needed). For local files < ~185 KB: base64-encode the bytes and prepend "data:image/gif;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending (shell encoders like openssl insert newlines that cause INVALID_BASE64). For larger files or any file encoded via a shell command: call server_info to get the upload_url and token instructions, POST the file there (multipart/form-data, field "file", Bearer token required), and pass the returned URL. | |
| columns | No | Grid columns. Auto-calculated if omitted. | |
| padding | No | Pixel gap between frames. Default: 0 | |
| bg_color | No | "auto" or hex "#RRGGBB". Default: "auto" | |
| remove_bg | No | Remove background from each frame. Default: false | |
| tolerance | No | Background removal threshold 0-255. Default: 30 |
Output Schema
| Name | Required | Description |
|---|---|---|
| url | Yes | Download URL for the output file (expires in 1 hour) |
| quota | Yes | |
| expires_at | Yes | ISO 8601 expiry timestamp |
| size_bytes | Yes | Output file size in bytes |
| content_type | Yes | MIME type of the output file (image/png, image/gif, application/zip, etc.) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a write operation (readOnlyHint=false) with no destruction. The description adds significant behavioral context beyond annotations: detailed file input handling (HTTPS, data URI, local base64 with whitespace stripping, upload via server_info). It also clarifies the optional background removal behavior. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose and key features. The file input handling adds length but provides necessary detail. One or two sentences could be trimmed, but overall it efficiently communicates the core function and critical parameter guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool complexity (6 params, output schema), the description covers the core conversion, optional background removal, and detailed file input handling. It does not explain the output schema or limitations (e.g., max file size), but the presence of a separate output schema shifts the burden. The provided details are sufficient for most use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all 6 parameters. The description adds significant value for the 'file' parameter by detailing multiple input formats and encoding requirements (e.g., stripping whitespace from base64). For other parameters, it largely restates schema descriptions, but the file details justify a score above baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool converts a GIF to a spritesheet PNG in a grid, with optional background removal. This clearly distinguishes it from siblings like gif_to_frames (extracts frames) or png_to_spritesheet (converts PNG sequences). The verb 'Convert' and resource 'GIF animation into a spritesheet' are specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool versus alternatives. While sibling tool names provide some context, there is no direct guidance on scenarios where this tool is preferred or not. The omission of usage context leaves room for ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
png_to_spritesheet
Merge multiple PNG files into a single spritesheet. Supports grid, horizontal, vertical, and packed (bin-packed) layouts with optional TexturePacker-compatible JSON metadata. Returns a download URL.
| Name | Required | Description | Default |
|---|---|---|---|
| align | No | ||
| files | Yes | PNG files — HTTPS URLs, data URIs, or output URLs from previous tool calls (pass directly, no re-encoding needed). For local files < ~185 KB each: base64-encode the bytes and prepend "data:image/png;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending (shell encoders like openssl insert newlines that cause INVALID_BASE64). For larger files or any file encoded via a shell command: call server_info to get the upload_url and token instructions, POST the file there (multipart/form-data, field "file", Bearer token required), and pass the returned URL. | |
| layout | No | Frame arrangement. Default: grid | |
| columns | No | Grid columns. Auto-calculated if omitted. | |
| extrude | No | Extrude outermost pixels by N px per frame | |
| padding | No | Pixel gap between frames | |
| bg_color | No | "transparent" or hex "#RRGGBB" | |
| fit_mode | No | ||
| cell_mode | No | Cell sizing mode. Default: auto_max | |
| cell_width | No | Required when cell_mode=fixed | |
| power_of_2 | No | Pad output to next power of 2 | |
| trim_input | No | Auto-trim transparent edges before compositing | |
| cell_height | No | Required when cell_mode=fixed | |
| file_name_order | No | Sort by _N filename suffix | |
| metadata_format | No | Atlas metadata format. Required (non-none) when layout=packed |
Output Schema
| Name | Required | Description |
|---|---|---|
| url | Yes | Download URL for the output file (expires in 1 hour) |
| quota | Yes | |
| expires_at | Yes | ISO 8601 expiry timestamp |
| size_bytes | Yes | Output file size in bytes |
| content_type | Yes | MIME type of the output file (image/png, image/gif, application/zip, etc.) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide basic hints. Description adds behavioral details like layout options, metadata format, and download URL return, enhancing transparency beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is three concise sentences with no waste. Purpose is front-loaded. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (15 parameters, 1 required, output schema exists), the description covers key behaviors: layouts, metadata, and output type. Could mention more about parameter varieties but isn't necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 87%, so baseline is 3. Description does not add much beyond summarizing overall functionality; individual parameters are well-documented in schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Merge multiple PNG files into a single spritesheet' with specific verb and resource, and lists supported layouts. It distinguishes from siblings like gif_to_spritesheet by specifying PNG files.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for PNG-to-spritesheet conversion but lacks explicit guidance on when not to use or alternatives. Context is clear from the title and sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
server_info (read-only, idempotent)
Returns this server's runtime configuration: upload endpoint URL, output file TTL, file size limits, and base64 encoding rules. Call this before working with large files (≥ 4 MB) or when building multi-step workflows that chain tool outputs.
No parameters.
Output Schema
| Name | Required | Description |
|---|---|---|
| upload_url | Yes | URL for uploading files via multipart/form-data (Bearer token required) |
| max_file_bytes | Yes | Maximum accepted file size in bytes |
| file_input_rules | Yes | Guidance for agents on how to pass file inputs |
| output_ttl_seconds | Yes | Seconds until output files expire |
| base64_threshold_bytes | Yes | Files smaller than this can be sent as base64 data URIs |
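An agent can use this response to pick a transport for each file, as the description suggests for large files. A sketch of that routing decision, using only field names from the output schema above; the function itself and the error handling are hypothetical:

```python
def choose_transport(file_size: int, info: dict) -> str:
    """Decide how to pass a file, given a server_info response.

    Field names (max_file_bytes, base64_threshold_bytes, upload_url)
    come from the server_info output schema; the routing policy is a
    sketch of the guidance in this listing.
    """
    if file_size > info["max_file_bytes"]:
        raise ValueError("file exceeds the server's size limit")
    if file_size < info["base64_threshold_bytes"]:
        return "data_uri"  # inline the file as a base64 data URI
    # Otherwise: POST multipart/form-data, field "file", Bearer token,
    # then pass the returned URL to the tool.
    return info["upload_url"]
```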
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly=true and idempotent=true, so the tool is clearly safe. Description adds specific details about what configuration is returned, enriching beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundant words. First sentence states purpose, second provides actionable usage advice. Efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no parameters and an output schema present, description fully covers tool's purpose and usage context. No missing information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist (input schema empty, 100% coverage). Description correctly omits param details; no additional meaning needed. Baseline 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'returns this server's runtime configuration' and lists specific items (upload endpoint URL, TTL, limits, encoding rules). Clearly distinguishes from sibling tools which are all image processing tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance: 'Call this before working with large files (≥ 4 MB) or when building multi-step workflows.' While not exhaustive, it gives practical usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
split_spritesheet
Slice a spritesheet PNG into individual frames, generate TexturePacker-compatible atlas JSON, or both. Provide columns+rows (grid mode) or cell_width+cell_height (cell mode).
| Name | Required | Description | Default |
|---|---|---|---|
| file | Yes | Spritesheet PNG — HTTPS URL, data URI, or output URL from a previous tool call (pass directly, no re-encoding needed). For local files < 4 MB: base64-encode the bytes and prepend "data:image/png;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending. For files ≥ 4 MB: call server_info to get the upload_url, POST the file there (multipart/form-data, field "file", Bearer token), and pass the returned URL. | |
| rows | No | Grid rows (grid mode) | |
| output | No | Default: frames | |
| columns | No | Grid columns (grid mode) | |
| padding | No | ||
| trim_top | No | ||
| row_range | No | ||
| trim_left | No | ||
| cell_width | No | Cell width in px (cell mode) | |
| skip_empty | No | Remove fully transparent frames. Default: true | |
| trim_right | No | ||
| cell_height | No | Cell height in px (cell mode) | |
| frame_count | No | ||
| trim_bottom | No | ||
| column_range | No | e.g. "0-5" or "2" | |
| metadata_format | No | | |
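The description's grid-versus-cell rule can be checked before calling the tool. A sketch of that validation, assuming that supplying both modes at once, or an incomplete mode, is an error; the listing states the either/or rule but not how the server handles violations:

```python
def split_mode(params: dict) -> str:
    """Classify a split_spritesheet call per the documented contract:
    columns+rows selects grid mode, cell_width+cell_height selects cell mode.
    """
    has_grid = "columns" in params and "rows" in params
    has_cell = "cell_width" in params and "cell_height" in params
    if has_grid and not has_cell:
        return "grid"
    if has_cell and not has_grid:
        return "cell"
    # Assumption: mixing modes or supplying half a mode is rejected here
    # rather than sent to the server.
    raise ValueError("provide columns+rows OR cell_width+cell_height")
```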
Output Schema
| Name | Required | Description |
|---|---|---|
| url | Yes | Download URL for the output file (expires in 1 hour) |
| quota | Yes | |
| expires_at | Yes | ISO 8601 expiry timestamp |
| size_bytes | Yes | Output file size in bytes |
| content_type | Yes | MIME type of the output file (image/png, image/gif, application/zip, etc.) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates the tool produces output but does not elaborate on side effects. Annotations show readOnlyHint=false and destructiveHint=false, implying mutation but no destruction; the description adds context about producing frames and JSON. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler, front-loaded with main verb 'Slice' and output. Each sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 16 parameters and an output schema present, the description is brief and omits details such as default values (e.g., skip_empty=true, output default 'frames'). It could provide more context on parameter interactions or the expected output structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50%; the description only mentions the two parameter groups (grid vs cell mode), not the many undocumented optional parameters like padding, trim, or frame_count. The schema provides some descriptions, but the description does not compensate for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool slices a spritesheet into frames and/or generates atlas JSON, and distinguishes between grid and cell modes. It is specific and differentiates from sibling tools like gif_to_spritesheet or trim_png.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use grid mode (columns+rows) vs cell mode (cell_width+cell_height), giving clear context. However, it does not explicitly exclude use with GIFs or other formats, nor does it mention sibling tools for alternative tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
spritesheet_to_animation
Slice a spritesheet PNG into frames and produce an animated GIF or WebP. Provide columns+rows (grid mode) or cell_width+cell_height (cell mode).
| Name | Required | Description | Default |
|---|---|---|---|
| file | Yes | Spritesheet PNG — HTTPS URL, data URI, or output URL from a previous tool call (pass directly, no re-encoding needed). For local files < ~185 KB: base64-encode the bytes and prepend "data:image/png;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending (shell encoders like openssl insert newlines that cause INVALID_BASE64). For larger files or any file encoded via a shell command: call server_info to get the upload_url and token instructions, POST the file there (multipart/form-data, field "file", Bearer token required), and pass the returned URL. | |
| loop | No | Loop count. 0 = infinite. Default: 0 | |
| rows | No | Grid rows (grid mode) | |
| columns | No | Grid columns (grid mode) | |
| padding | No | Pixel gap between cells. Default: 0 | |
| quality | No | WebP quality 0-100. Default: 80 | |
| duration | No | Frame duration in ms. Default: 100 | |
| lossless | No | WebP lossless. Default: false | |
| trim_top | No | ||
| row_range | No | ||
| trim_left | No | ||
| cell_width | No | Cell width in px (cell mode) | |
| skip_empty | No | Auto-remove fully transparent frames. Default: true | |
| trim_right | No | ||
| cell_height | No | Cell height in px (cell mode) | |
| frame_count | No | Actual frame count for incomplete last row | |
| trim_bottom | No | ||
| column_range | No | e.g. "0-5" or "2" | |
| output_format | No | Default: gif |
Output Schema
| Name | Required | Description |
|---|---|---|
| url | Yes | Download URL for the output file (expires in 1 hour) |
| quota | Yes | |
| expires_at | Yes | ISO 8601 expiry timestamp |
| size_bytes | Yes | Output file size in bytes |
| content_type | Yes | MIME type of the output file (image/png, image/gif, application/zip, etc.) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate non-read-only and non-destructive behavior. The description adds minimal behavioral context beyond the core operation. It does not cover upload requirements or side effects, but the file parameter description in the schema handles upload instructions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. First sentence states purpose and output format, second explains the two parameter modes. Perfectly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (19 parameters) and the presence of a detailed output schema, the description adequately covers the essential decision (grid vs cell mode). It does not discuss return values or error scenarios, but the output schema and parameter descriptions fill those gaps sufficiently.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 74%, so the schema already documents most parameters. The description adds value by grouping columns+rows as grid mode and cell_width+cell_height as cell mode, which aids understanding. However, it does not elaborate on parameters like trim_top or row_range, so it only slightly enhances the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (slice spritesheet into frames and produce animation), the input (spritesheet PNG), and the output (animated GIF or WebP). It also specifies two modes (grid vs cell) with required parameters, effectively distinguishing this tool from siblings like split_spritesheet or gif_to_frames.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you have a spritesheet and want an animation, offering clear mode alternatives. However, it does not explicitly state when not to use this tool or reference sibling tools, which would help the agent decide between alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trim_png
Crop transparent edges from one or more PNG files. Single file returns PNG; multiple files return a ZIP.
| Name | Required | Description | Default |
|---|---|---|---|
| files | Yes | PNG files — HTTPS URLs, data URIs, or output URLs from previous tool calls (pass directly, no re-encoding needed). For local files < ~185 KB each: base64-encode the bytes and prepend "data:image/png;base64," — you MUST strip ALL whitespace and newlines from the base64 string before prepending (shell encoders like openssl insert newlines that cause INVALID_BASE64). For larger files or any file encoded via a shell command: call server_info to get the upload_url and token instructions, POST the file there (multipart/form-data, field "file", Bearer token required), and pass the returned URL. | |
| padding | No | Transparent margin to preserve around trimmed content. Default: 0 | |
| threshold | No | Alpha threshold 0-255. Pixels with alpha ≤ threshold are trimmed. Default: 0 |
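The data-URI path described for the `files` parameter is easy to get wrong with shell encoders, which insert newlines into base64 output. A minimal sketch of whitespace-safe encoding in Python (the function name is illustrative, not part of the server's API):

```python
import base64


def png_bytes_to_data_uri(data: bytes) -> str:
    """Encode raw PNG bytes as a data URI for the `files` parameter.

    base64.b64encode never inserts newlines, unlike shell encoders
    such as `openssl base64`, so the result is already free of the
    whitespace that would trigger INVALID_BASE64.
    """
    encoded = base64.b64encode(data).decode("ascii")
    return "data:image/png;base64," + encoded
```

Per the parameter description, this path is only suitable for local files under roughly 185 KB; larger files should go through the upload endpoint instead.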
Output Schema
| Name | Required | Description |
|---|---|---|
| url | Yes | Download URL for the output file (expires in 1 hour) |
| quota | Yes | |
| expires_at | Yes | ISO 8601 expiry timestamp |
| size_bytes | Yes | Output file size in bytes |
| content_type | Yes | MIME type of the output file (image/png, image/gif, application/zip, etc.) |
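Because output URLs expire after one hour, a client can use the `expires_at` field to decide whether a cached result URL is still worth fetching. A small sketch, assuming the timestamp uses either a trailing `Z` or an explicit UTC offset (the helper name is hypothetical):

```python
from datetime import datetime, timezone


def result_is_fresh(expires_at: str) -> bool:
    """Return True if the ISO 8601 `expires_at` timestamp is still in the future.

    datetime.fromisoformat does not accept a bare "Z" suffix on older
    Python versions, so normalize it to "+00:00" first.
    """
    expiry = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) < expiry
```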
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses output-format behavior (a single file returns a PNG; multiple files return a ZIP). Adds detail on the padding and threshold parameters beyond the schema. No contradiction with annotations; non-destructive behavior is implied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single paragraph, front-loaded with purpose. All sentences earn their place, though the encoding instructions could be structured (e.g., as bullet points) for easier parsing. Still concise enough given the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers all parameters, output format, edge cases (base64 encoding errors), and cross-tool dependency (server_info for uploads). Output schema exists, so return values are already documented. Complete for a tool with multiple input methods.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
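The cross-tool upload procedure mentioned above (POST multipart/form-data with a `file` field and a Bearer token, per the `files` description) can be sketched as follows. This helper only builds the request body; the actual `upload_url` and token come from a prior `server_info` call, and the helper name is illustrative:

```python
import uuid


def build_multipart(filename: str, data: bytes) -> tuple[bytes, str]:
    """Build a multipart/form-data body with a single "file" field.

    Returns (body, content_type); the content type carries the
    generated boundary so the server can parse the body.
    """
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: image/png\r\n\r\n"
    ).encode("ascii")
    tail = f"\r\n--{boundary}--\r\n".encode("ascii")
    return head + data + tail, f"multipart/form-data; boundary={boundary}"
```

The body would then be POSTed to the `upload_url` with an `Authorization: Bearer <token>` header and the returned content type, and the URL in the response passed as a `files` entry.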
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds significant value beyond schema: detailed instructions for `files` parameter including base64 whitespace warnings and server upload procedure. Explains `padding` and `threshold` defaults and meanings. Full schema coverage (100%) is supplemented with practical usage tips.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states that the tool crops transparent edges from PNG files and specifies the output format (a single PNG for one file, a ZIP for several). This distinguishes it from sibling tools, which deal with animations or spritesheets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides comprehensive guidance on file input methods (URLs, data URIs, local files) with specific encoding and upload instructions. Does not explicitly exclude use with non-PNG files or compare to sibling tools, but context implies it's for static PNG trimming.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
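Generating the claim file is mechanical; a minimal sketch that emits the structure shown above (the function name is illustrative):

```python
import json


def make_glama_json(email: str) -> str:
    """Render the /.well-known/glama.json claim document.

    The email must match the one associated with your Glama account.
    """
    doc = {
        "$schema": "https://glama.ai/mcp/schemas/connector.json",
        "maintainers": [{"email": email}],
    }
    return json.dumps(doc, indent=2)
```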
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.