Glama
Ownership verified

Server Details

Transform and optimize images by resizing, compressing, and converting across multiple formats. Streamline complex editing workflows using a multi-step pipeline for efficient sequential processing.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.


Available Tools

7 tools
analyze_image (Grade: B)

Analyze an image

Fetch an image from a URL or base64 and return its metadata (size in bytes). Always free.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Example Response:

{
  "size_bytes": 1
}
Parameters (JSON Schema)

source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
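The `source` parameter accepts either a public URL or base64 data. As a minimal sketch of the base64 path, a local image's bytes could be packaged as a data URI like this (the `to_data_uri` helper is ours for illustration, not part of the server):

```python
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    # Encode raw image bytes as a data URI accepted by the `source` parameter.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Placeholder bytes stand in for a real image file read from disk.
args = {"source": to_data_uri(b"\x89PNG\r\n\x1a\n", "image/png")}
```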
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full burden. It discloses that the tool accepts URLs or base64 and provides an example response showing the 'size_bytes' format. However, it omits critical behavioral details like supported image formats, size limits, error handling for invalid URLs, or whether the image is temporarily stored.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description opens with the tautological 'Analyze an image' before getting to the substance. The inclusion of HTTP response codes (### Responses, **200**) is atypical for MCP tool descriptions and adds verbosity without value for agent invocation decisions. However, the example response is appropriately helpful given the lack of a structured output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no annotations, the description adequately compensates by providing an example response that clarifies the return value (size_bytes). However, it leaves ambiguity about whether 'size_bytes' is the only metadata returned or if other fields are omitted from the example.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with the 'source' parameter already well-documented as accepting 'a public URL (https://...) or a base64-encoded string.' The description repeats this information ('Fetch an image from a URL or base64') without adding syntax clarification beyond the schema, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches an image and returns metadata specifically 'size in bytes,' which is concrete. However, it lacks differentiation from the sibling tool 'get_format_info,' which likely also returns image metadata, leaving ambiguity about which to use for general metadata retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The only usage guidance provided is 'Always free,' which is a cost signal but offers no indication of when to use this tool versus siblings like 'get_format_info' or 'compress_image.' There are no prerequisites, exclusions, or workflow positioning hints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compress_image (Grade: B)

Compress an image

Re-encode an image with quality/format options to reduce file size. Supports jpeg, png, webp, tiff, gif.

Responses:

200: Processed image binary (Success Response) Content-Type: application/json Content-Type: image/jpeg

Example Response:

"string"

Content-Type: image/png

Example Response:

"string"

Content-Type: image/webp

Example Response:

"string"
Parameters (JSON Schema)

q (optional): Quality (1-100). Lower = smaller file. Typical values: 60-80 for web, 85-95 for print. Maps to libvips Q parameter.
strip (optional): Strip metadata (EXIF, ICC profile, etc.) from the output. Reduces file size slightly.
format (optional): Output format: jpeg, png, webp, tiff, or gif. If omitted, the original format is preserved.
source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
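The schema's own guidance (60-80 for web, 85-95 for print) can be wrapped in a small argument builder. A sketch, assuming nothing beyond the documented parameters; the `compress_args` helper and its preset values are illustrative, not part of the server:

```python
def compress_args(source: str, target: str = "web") -> dict:
    # Presets follow the q guidance in the schema: 60-80 for web,
    # 85-95 for print. These exact values are our choice, not the tool's.
    presets = {"web": 75, "print": 90}
    if target not in presets:
        raise ValueError(f"unknown target: {target}")
    # Stripping EXIF/ICC metadata shaves a bit more size for web delivery.
    return {"source": source, "q": presets[target], "strip": target == "web"}

args = compress_args("https://example.com/photo.jpg", target="web")
```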
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool 're-encode[s]' and documents the response types (image binary with specific Content-Types: jpeg, png, webp) in the Responses section, which compensates partially for the lack of output schema. However, it omits safety warnings (e.g., loss of transparency when converting PNG to JPEG) and error handling behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, but the '### Responses' section contains repetitive boilerplate (three identical JSON 'string' examples for different content types) that wastes tokens without adding distinct information. The structure is logical but could be tightened.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description provides necessary behavioral context by documenting the 200 response status and possible Content-Type headers (image/jpeg, image/png, etc.). With 100% parameter schema coverage and only 4 simple parameters, the description provides adequate completeness, though error scenarios remain undocumented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (q, strip, format, source are all well-documented). The description mentions 'quality/format options' and lists supported formats, but this largely repeats the schema content rather than adding new semantic context or usage examples beyond the structured data.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Compress[es] an image' and 'Re-encode[s] an image with quality/format options to reduce file size.' It specifies the supported formats (jpeg, png, webp, tiff, gif). However, it does not explicitly differentiate from the sibling 'convert_image' tool, which may also change formats but likely without the specific file-size reduction focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what the tool does (re-encode to reduce file size) but provides no guidance on when to use it versus siblings like 'convert_image' or 'resize_image.' It lacks warnings about lossy compression trade-offs or when to strip metadata versus preserve it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_image (Grade: A)

Convert image format

Convert an image to a different format (jpeg, png, webp, tiff, gif). Optionally set quality, strip metadata, or enable lossless mode (webp).

Responses:

200: Processed image binary (Success Response) Content-Type: application/json Content-Type: image/jpeg

Example Response:

"string"

Content-Type: image/png

Example Response:

"string"

Content-Type: image/webp

Example Response:

"string"
Parameters (JSON Schema)

q (optional): Quality (1-100). Maps to libvips Q parameter.
strip (optional): Strip metadata (EXIF, ICC profile, etc.) from the output.
format (required): Target format: jpeg, png, webp, tiff, or gif.
source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
lossless (optional): Enable lossless encoding. Only applies to webp.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses key behavioral traits: supported formats, that metadata (EXIF/ICC) can be stripped, and that lossless encoding is webp-only. It also includes response format information (binary/image data) compensating for the lack of output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core functional description is concise (two sentences). However, the inclusion of verbose HTTP response documentation with repetitive JSON examples ('string') adds unnecessary bulk that does not significantly aid tool selection. The structure mixes functional description with API response specs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description provides adequate coverage: it explains the conversion capability, lists supported formats, documents response behavior (200/binary), and notes important constraints (lossless webp-only). With rich schema coverage for inputs, this is sufficient for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value beyond the schema, primarily grouping the optional parameters (quality, strip, lossless) and emphasizing the webp-specific nature of lossless mode, which is already documented in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts images to specific formats (jpeg, png, webp, tiff, gif) with specific optional features. However, it does not explicitly differentiate from sibling tools like 'compress_image' or 'image_pipeline' which may overlap in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage contexts by mentioning optional capabilities (quality settings, metadata stripping, lossless mode), including the important constraint that lossless only applies to webp. However, it lacks explicit guidance on when to choose this tool over 'compress_image' or other siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

crop_image (Grade: B)

Crop an image

Extract a rectangular region from an image. Specify the top-left corner (x, y) and the dimensions (width, height) in pixels.

Responses:

200: Processed image binary (Success Response) Content-Type: application/json Content-Type: image/jpeg

Example Response:

"string"

Content-Type: image/png

Example Response:

"string"

Content-Type: image/webp

Example Response:

"string"
Parameters (JSON Schema)

x (required): Left edge of the crop rectangle in pixels.
y (required): Top edge of the crop rectangle in pixels.
width (required): Width of the crop rectangle in pixels.
format (optional): Output format: jpeg, png, webp, tiff, or gif. If omitted, the original format is preserved.
height (required): Height of the crop rectangle in pixels.
source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
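A common pattern is computing the x/y offset for a centered crop from the image dimensions (which analyze_image does not report, so they must come from elsewhere). A sketch; the `center_crop` helper and its clamping behavior are ours, not the server's:

```python
def center_crop(img_w: int, img_h: int, crop_w: int, crop_h: int) -> dict:
    # Compute x/y for a crop rectangle centered in the image, clamping
    # the rectangle to the image bounds so it never exceeds them.
    crop_w = min(crop_w, img_w)
    crop_h = min(crop_h, img_h)
    return {
        "x": (img_w - crop_w) // 2,
        "y": (img_h - crop_h) // 2,
        "width": crop_w,
        "height": crop_h,
    }

# An 800x600 crop centered in a 1920x1080 frame starts at (560, 240).
args = {"source": "https://example.com/frame.png", **center_crop(1920, 1080, 800, 600)}
```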
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses output behavior by listing response formats (jpeg, png, webp) and that it returns binary image data, compensating for the lack of output schema. However, it omits error handling, coordinate boundary constraints, and whether the operation is idempotent or destructive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core description is appropriately front-loaded with two clear sentences explaining the operation. However, the 'Responses' section is unnecessarily verbose, containing repetitive example blocks (all showing 'string') and HTTP-specific metadata that adds noise without aiding tool selection for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 6 parameters with complete schema coverage and no output schema, the description adequately covers the functional contract by explaining input semantics and output formats. It lacks error documentation and coordinate system constraints (e.g., zero-indexing), but provides sufficient context for basic invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds semantic context by framing x and y as the 'top-left corner,' which helps conceptualize the coordinate system, but otherwise largely restates the schema's parameter descriptions without adding syntax examples or format constraints beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Crop[s] an image' and 'Extract a rectangular region,' providing a specific verb and resource. It distinguishes this from sibling tools like resize_image or convert_image by specifying the rectangular extraction mechanism and coordinate-based operation, though it does not explicitly contrast use cases with named siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains how to use the tool (specifying x, y, width, height) but provides no explicit guidance on when to choose this over alternatives like resize_image or analyze_image. There are no 'when-not' conditions, prerequisites, or workflow recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_format_info (Grade: B)

Get supported formats and options

Returns supported output formats and their configurable options.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

No parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool returns format options and mentions a JSON response, but lacks critical behavioral context such as whether results are cached, if the call is idempotent, or what specific 'options' refers to (compression settings? quality ranges?).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The first sentence is efficient, but the second sentence largely restates the first ('Returns supported output formats...'). The inclusion of the HTTP response documentation ('### Responses: **200**...') adds structural noise irrelevant to an AI agent's tool selection decision.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description must explain return values. It mentions 'supported output formats and their configurable options' but remains vague about the structure (is it a list? nested objects?) and omits the image-specific context that would make the return value meaningful given the tool ecosystem.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, which establishes a baseline of 4 per evaluation rules. The description appropriately does not invent parameters, maintaining consistency with the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'supported formats and their configurable options' using the specific verb 'Get'. However, it fails to specify that these are image formats (evident from siblings like convert_image, compress_image), which would help the agent understand the domain context immediately.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings. It should explicitly state this is for discovering capabilities before using convert_image or compress_image, but instead offers no contextual usage hints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

image_pipeline (Grade: A)

Run a multi-step image pipeline

Chain multiple operations (resize, compress, convert, crop) in sequence. The image is fetched once, then each operation is applied to the output of the previous one. Max 10 operations per pipeline.

Responses:

200: Processed image binary (Success Response) Content-Type: application/json Content-Type: image/jpeg

Example Response:

"string"

Content-Type: image/png

Example Response:

"string"

Content-Type: image/webp

Example Response:

"string"
Parameters (JSON Schema)

source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
operations (required): Ordered list of operations to apply sequentially. Each operation receives the output of the previous one. Max 10.
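The documented 10-operation limit can be enforced client-side when assembling the request. A sketch only: the `build_pipeline` helper is ours, and the exact shape of each operation object is not shown on this page, so the op/params layout below is an assumption for illustration:

```python
MAX_OPS = 10  # documented limit per pipeline

def build_pipeline(source: str, operations: list[dict]) -> dict:
    # Guard the documented 10-operation ceiling before calling the tool.
    if not 1 <= len(operations) <= MAX_OPS:
        raise ValueError(f"pipeline needs 1-{MAX_OPS} operations")
    return {"source": source, "operations": operations}

# Resize, then compress, then convert; each step consumes the previous output.
# The {"op": ..., "params": ...} shape is assumed, not confirmed by this page.
args = build_pipeline("https://example.com/photo.jpg", [
    {"op": "resize", "params": {"scale": 0.5}},
    {"op": "compress", "params": {"q": 75, "strip": True}},
    {"op": "convert", "params": {"format": "webp"}},
])
```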
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the execution model (sequential processing with single fetch), the operation limit (max 10), and response formats (image/jpeg, png, webp). It could be improved by mentioning error handling behavior (what happens if an operation in the chain fails).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core description is appropriately concise (2 sentences explaining purpose and mechanism). However, the '### Responses' section containing HTTP status codes and placeholder JSON examples ('string') constitutes unnecessary bloat for an AI agent selecting tools; this technical documentation does not aid in tool selection and could be removed or simplified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (chained operations with nested parameters) and lack of output schema, the description adequately covers the essential behavioral contract including the sequential processing model, operation limits, and return content types. It sufficiently prepares an agent to invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting both the source parameter and the operations array structure including the nested operation types and params. The description mentions the max 10 limit and lists the operation types, but since the schema is comprehensive, the description adds only marginal semantic value beyond the structured documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Run[s] a multi-step image pipeline' and specifies it chains operations (resize, compress, convert, crop) in sequence. This effectively distinguishes it from single-operation siblings like compress_image or resize_image by emphasizing the multi-step/chaining capability.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear workflow guidance explaining that 'The image is fetched once, then each operation is applied to the output of the previous one' and notes the 'Max 10 operations per pipeline' constraint. However, it does not explicitly state when to choose this over individual operation tools (e.g., 'use this instead of individual tools when you need multiple operations').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resize_image (Grade: B)

Resize an image

Scale an image by a factor. Use 'scale' for uniform scaling, or 'scale_x'/'scale_y' for independent axes. Values are float factors (e.g. 0.5 = half size).

Responses:

200: Processed image binary (Success Response) Content-Type: application/json Content-Type: image/jpeg

Example Response:

"string"

Content-Type: image/png

Example Response:

"string"

Content-Type: image/webp

Example Response:

"string"
Parameters (JSON Schema)

scale (optional): Uniform scale factor applied to both axes (e.g. 0.5 = half size). Use this for simple scaling; use scale_x/scale_y for independent axes.
format (optional): Output format: jpeg, png, webp, tiff, or gif. If omitted, the original format is preserved.
source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
scale_x (optional): Horizontal scale factor (e.g. 0.5 = half width). If only scale_x is given, scale_y defaults to the same value.
scale_y (optional): Vertical scale factor (e.g. 0.75 = 75% height). Optional; defaults to scale_x if omitted.
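Since the tool takes factors rather than target dimensions, a target width has to be converted into a factor first. A minimal sketch, assuming the current width is already known (the `scale_for_width` helper is ours, not part of the server):

```python
def scale_for_width(current_w: int, target_w: int) -> float:
    # Derive the uniform scale factor that hits a target pixel width.
    return target_w / current_w

# Shrink a 4000px-wide image to 1000px: factor 0.25, applied to both axes.
args = {"source": "https://example.com/large.png", "scale": scale_for_width(4000, 1000)}
```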
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry full behavioral burden. It documents response formats (image/jpeg, png, webp binaries) compensating for the lack of output schema, but omits safety information (read-only vs destructive), rate limits, or maximum dimension constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The functional description is efficiently front-loaded, but the Responses section is bloated with repetitive formatting (multiple Content-Type headers, redundant JSON 'string' examples) that could be condensed without losing information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately covers response types (200 status, binary content types) and examples. Parameter semantics are fully covered by the schema. Minor gaps remain regarding error handling and operational limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. The description adds conceptual grouping of parameters ('uniform scaling' vs 'independent axes') but does not significantly expand on the detailed examples and format specifications already present in the schema property descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Scale an image by a factor') and mechanism (uniform vs independent axes scaling), which implicitly distinguishes it from cropping or format conversion. However, it lacks explicit differentiation from sibling tools like 'crop_image' that also modify dimensions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear guidance on parameter selection ('Use scale for uniform scaling, or scale_x/scale_y for independent axes'), but offers no guidance on when to choose this tool over siblings like 'crop_image' or 'image_pipeline' for dimension changes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
