
Server Details

Image processing for AI agents. Resize, convert, compress, and pipeline images.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.7/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but compress_image and convert_image overlap in format conversion. Descriptions help differentiate (compress reduces size, convert changes format), but an agent might still be uncertain.

Naming Consistency: 4/5

Five tools follow verb_noun pattern (analyze_image, compress_image, etc.). get_format_info also fits. image_pipeline breaks the pattern as noun_noun, causing minor inconsistency.

Tool Count: 5/5

Seven tools cover core image operations (analyze, compress, convert, crop, resize, info, and pipeline) without bloat. The count is well-suited for the domain.

Completeness: 3/5

The set provides basic CRUD-like operations (read metadata, convert, resize, crop) but lacks common operations like rotate, flip, or filters. The pipeline partially compensates, but gaps remain.

Available Tools

7 tools
analyze_image: A

Analyze an image

Fetch an image from a URL or base64 and return its metadata (size in bytes). Always free.

Responses:

200: Successful Response (Content-Type: application/json)

Example Response:

{
  "size_bytes": 1
}
Parameters (JSON Schema)

source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
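A minimal sketch of the arguments an agent might pass to analyze_image, assuming the URL form of `source`; the MCP client wiring and the example URL are illustrative, not part of the server's documentation.

```python
# Hypothetical analyze_image arguments. The schema accepts either a public
# URL or a base64 string (optionally as a data URI); one form per call.
url_source = "https://example.com/photo.png"            # public URL form
data_uri_source = "data:image/png;base64,iVBORw0KGgo="  # data-URI form (sample prefix)

analyze_args = {"source": url_source}
```

The response is a small JSON object such as `{"size_bytes": 1}` per the example above.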
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description explains it fetches an image and returns size in bytes. It does not disclose error handling, side effects, or that it is read-only, which would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is short and front-loaded with the purpose, and the example response adds clarity. However, the first sentence is somewhat redundant with the title, and the 'Always free' note could be folded into it.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description covers purpose, parameter format, and an example response. It does not explicitly address error conditions but is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of the single parameter with a detailed description. The tool description adds minimal extra value beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes an image from a URL or base64 and returns its size in bytes, which distinguishes it from sibling tools like compress_image or crop_image that modify images.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when needing image metadata (size), and notes it is always free. However, it lacks explicit when-not-to-use or mention of alternative tools for other analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compress_image: B

Compress an image

Re-encode an image with quality/format options to reduce file size. Supports jpeg, png, webp, tiff, gif.

Responses:

200: Processed image binary (Content-Type: application/json, image/jpeg, image/png, or image/webp)

Example Response (all content types):

"string"
Parameters (JSON Schema)

source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
q (optional): Quality (1-100). Lower = smaller file. Typical values: 60-80 for web, 85-95 for print. Maps to libvips Q parameter.
strip (optional): Strip metadata (EXIF, ICC profile, etc.) from the output. Reduces file size slightly.
format (optional): Output format: jpeg, png, webp, tiff, or gif. If omitted, the original format is preserved.
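A hedged sketch of a typical compress_image call, assuming a web-delivery use case; the source URL and chosen values are illustrative, while the ranges come from the parameter descriptions above.

```python
# Hypothetical compress_image arguments. Only "source" is required;
# q of 60-80 is the documented sweet spot for web images.
compress_args = {
    "source": "https://example.com/photo.jpg",
    "q": 72,           # quality 1-100; lower = smaller file
    "strip": True,     # drop EXIF/ICC metadata for a small extra saving
    "format": "webp",  # omit to keep the original format
}
```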
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full burden. It mentions that the tool re-encodes and returns binary images with various content types, but it omits behavioral details such as whether the operation is destructive, required permissions, rate limits, or potential side effects like quality loss. The response examples are helpful but insufficient for full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with a clear summary but becomes verbose with extensive response examples that could be condensed. It is structured but includes redundant formatting. The space could be used more efficiently to convey tool behavior or usage constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the tool's purpose, supported formats, and response types, which is adequate for a tool with a simple single output. However, without an output schema, the description should more clearly define the return structure (e.g., what the JSON string contains). It lacks details on error cases or edge behaviors, leaving some gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with descriptions, so the baseline is 3. The description adds no new parameter information beyond the schema. It does not explain interactions or provide additional context for parameter values, so it does not elevate the score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Compress an image' and 'Re-encode an image with quality/format options to reduce file size.' It effectively communicates the core purpose and differentiates from sibling tools like convert_image (which may not focus on compression) and resize_image (which changes dimensions). The supported formats are listed, adding specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly provide guidance on when to use this tool versus alternatives (e.g., convert_image, image_pipeline). It implies compression use but lacks explicit 'when-to-use' or 'when-not-to-use' instructions. Sibling names are present in context but not leveraged in the description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_image: A

Convert image format

Convert an image to a different format (jpeg, png, webp, tiff, gif). Optionally set quality, strip metadata, or enable lossless mode (webp).

Responses:

200: Processed image binary (Content-Type: application/json, image/jpeg, image/png, or image/webp)

Example Response (all content types):

"string"
Parameters (JSON Schema)

source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
format (required): Target format: jpeg, png, webp, tiff, or gif.
q (optional): Quality (1-100). Maps to libvips Q parameter.
strip (optional): Strip metadata (EXIF, ICC profile, etc.) from the output.
lossless (optional): Enable lossless encoding. Only applies to webp.
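A sketch of a convert_image call; the source URL is illustrative. Note the contrast with compress_image: here "format" is required, and "lossless" only has an effect for webp output.

```python
# Hypothetical convert_image arguments: PNG in, lossless webp out.
convert_args = {
    "source": "https://example.com/diagram.png",
    "format": "webp",   # required target format
    "lossless": True,   # ignored for formats other than webp
}
```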
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries full burden. It explains that the tool returns binary with Content-Type, and mentions optional parameters like quality, strip, and lossless. However, it does not discuss side effects, rate limits, or authentication.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear title, a concise paragraph, and response examples. However, the response examples are somewhat lengthy and could be more concisely represented.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains return values (binary with Content-Type). It covers required and optional parameters and provides format examples. Missing error handling details but is otherwise complete for a conversion tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds a brief summary of optional parameters but does not provide additional semantics beyond what the schema already offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Convert image format' and lists target formats, clearly indicating the tool's purpose. It distinguishes from siblings like compress_image and crop_image by focusing solely on format conversion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (convert format) but does not explicitly mention when not to use or suggest alternatives like resize_image or compress_image. No explicit usage guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

crop_image: A

Crop an image

Extract a rectangular region from an image. Specify the top-left corner (x, y) and the dimensions (width, height) in pixels.

Responses:

200: Processed image binary (Content-Type: application/json, image/jpeg, image/png, or image/webp)

Example Response (all content types):

"string"
Parameters (JSON Schema)

source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
x (required): Left edge of the crop rectangle in pixels.
y (required): Top edge of the crop rectangle in pixels.
width (required): Width of the crop rectangle in pixels.
height (required): Height of the crop rectangle in pixels.
format (optional): Output format: jpeg, png, webp, tiff, or gif. If omitted, the original format is preserved.
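A sketch of a crop_image call extracting a 640x480 region whose top-left corner sits at (100, 50); the URL and coordinates are illustrative.

```python
# Hypothetical crop_image arguments. source, x, y, width, and height are
# required; format is optional and defaults to the original format.
crop_args = {
    "source": "https://example.com/photo.png",
    "x": 100,      # left edge, pixels
    "y": 50,       # top edge, pixels
    "width": 640,  # crop width, pixels
    "height": 480, # crop height, pixels
}
```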
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description partially discloses behavioral traits by listing response content types and providing examples. However, it omits critical details such as error handling for invalid parameters, side effects (no mutation of source), and required permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with a summary and separate response section, but the response examples are verbose (repeated 'string' for each content type). Overall efficient, but could be more concise by consolidating the response examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the operation and response format adequately, but lacks details on edge cases (e.g., cropping out of bounds) and error behavior. Given the absence of an output schema, the response examples help, but completeness is not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so each parameter is already well-documented. The description adds little beyond repeating x, y, width, height from the schema, and only briefly mentions format. It does not provide meaningful new semantic context for any parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool crops an image by extracting a rectangular region, using specific verb 'crop' and resource 'image'. It is directly distinguishable from siblings like resize_image (changes dimensions) and compress_image (reduces file size), avoiding any tautology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives (e.g., resize_image, image_pipeline). The description only explains what the tool does without specifying context or exclusions, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_format_info: A

Get supported formats and options

Returns supported output formats and their configurable options.

Responses:

200: Successful Response (Content-Type: application/json)

Parameters (JSON Schema)

No parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It implies a read operation but does not explicitly state that it is non-destructive or describe any other behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Short and to the point, but could be better structured with explicit labeling of response content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter info tool, but lacks details about possible response structure or any prerequisites.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, and schema coverage is 100%. The description does not need to add parameter semantics, and it correctly omits such information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns supported formats and options, which is a specific resource. It distinguishes from sibling image processing tools by focusing on information retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage for getting format info before image conversion, but no explicit guidance on when to use or not use, nor alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

image_pipeline: A

Run a multi-step image pipeline

Chain multiple operations (resize, compress, convert, crop) in sequence. The image is fetched once, then each operation is applied to the output of the previous one. Max 10 operations per pipeline.

Responses:

200: Processed image binary (Content-Type: application/json, image/jpeg, image/png, or image/webp)

Example Response (all content types):

"string"
Parameters (JSON Schema)

source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
operations (required): Ordered list of operations to apply sequentially. Each operation receives the output of the previous one. Max 10.
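A sketch of an image_pipeline call chaining resize, compress, and convert. The shape of each operation object (the "op" key and per-step fields) is an assumption here — the schema only says "ordered list of operations" — while the 10-operation cap comes from the tool description.

```python
# Hypothetical image_pipeline arguments; the per-operation object shape
# ({"op": ..., ...}) is assumed, not taken from the published schema.
MAX_OPERATIONS = 10  # cap stated in the tool description

pipeline_args = {
    "source": "https://example.com/photo.png",
    "operations": [
        {"op": "resize", "scale": 0.5},              # halve both axes
        {"op": "compress", "q": 75, "strip": True},  # re-encode smaller
        {"op": "convert", "format": "webp"},         # final format
    ],
}
assert len(pipeline_args["operations"]) <= MAX_OPERATIONS
```

Each step consumes the previous step's output, so ordering matters: resizing before compressing avoids re-encoding pixels that will be discarded.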
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the image is fetched once, that operations are applied sequentially, and the 10-operation cap. It does not mention error handling or permissions but is transparent about the processing flow.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is somewhat concise but includes response examples that could be streamlined. It is structured with headings but could be more compact.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description includes response formats (Content-Type, binary). It covers key aspects like fetching and sequential application but could mention error details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, baseline is 3. The description adds context about pipeline chaining but doesn't add much beyond what the schema already provides for parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it runs a multi-step image pipeline, chaining operations like resize, compress, convert, crop. This distinguishes it from sibling tools that perform single operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly explains when to use this tool (for chaining multiple operations) and implies using individual tools for single steps. It also specifies a max of 10 operations per pipeline.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resize_image: A

Resize an image

Scale an image by a factor. Use 'scale' for uniform scaling, or 'scale_x'/'scale_y' for independent axes. Values are float factors (e.g. 0.5 = half size).

Responses:

200: Processed image binary (Content-Type: application/json, image/jpeg, image/png, or image/webp)

Example Response (all content types):

"string"
Parameters (JSON Schema)

source (required): Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...).
scale (optional): Uniform scale factor applied to both axes (e.g. 0.5 = half size). Use this for simple scaling; use scale_x/scale_y for independent axes.
scale_x (optional): Horizontal scale factor (e.g. 0.5 = half width). If only scale_x is given, scale_y defaults to the same value.
scale_y (optional): Vertical scale factor (e.g. 0.75 = 75% height). Defaults to scale_x if omitted.
format (optional): Output format: jpeg, png, webp, tiff, or gif. If omitted, the original format is preserved.
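Two sketches of resize_image arguments, one per scaling mode; the URLs and factors are illustrative, and the uniform/independent split follows the parameter descriptions above.

```python
# Hypothetical resize_image arguments. Use "scale" alone for uniform
# scaling; scale_x/scale_y are the independent-axis alternative, with
# scale_y falling back to scale_x when omitted.
uniform_args = {
    "source": "https://example.com/photo.png",
    "scale": 0.5,     # half size on both axes
}
stretch_args = {
    "source": "https://example.com/photo.png",
    "scale_x": 0.5,   # half width
    "scale_y": 0.75,  # 75% height
}
```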
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description details the scaling operation, input sources (URL/base64), and output formats (jpeg, png, webp) with example responses. It discloses that the tool returns an image binary, making behavior transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise but the example responses are repetitive and unhelpful (just 'string'). It is front-loaded well with purpose and parameter guidance, but the response section could be streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers input, scaling, and output but lacks error handling, format compatibility, or size limits. Given no output schema, the response description is adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and schema descriptions already explain parameters clearly. The tool description adds minimal new information beyond what the schema provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Resize an image' with specific instructions on scaling factors. It distinguishes this tool from siblings like crop, compress, convert, etc., via explicit focus on scaling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance on using scale vs scale_x/scale_y for uniform vs independent scaling, but does not explicitly guide when to use this tool over sibling tools like crop_image or compress_image.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
