Pictomancer.ai
Server Details
Transform and optimize images by resizing, compressing, and converting across multiple formats. Streamline complex editing workflows using a multi-step pipeline for efficient sequential processing.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
7 tools

analyze_image (B)
Analyze an image
Fetch an image from a URL or base64 and return its metadata (size in bytes). Always free.
Responses:
200: Successful Response (Success Response)
Content-Type: application/json
Example Response:
{
  "size_bytes": 1
}

| Name | Required | Description | Default |
|---|---|---|---|
| source | Yes | Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...). | |
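The `source` parameter accepts either a public URL or a base64 string, optionally wrapped as a data URI. A minimal sketch of building a data URI source in Python (the `make_data_uri` helper is ours for illustration, not part of the server's API):

```python
import base64

def make_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Wrap raw image bytes as a data URI usable as the 'source' argument."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Placeholder bytes stand in for a real image file read from disk.
source = make_data_uri(b"\x89PNG\r\n\x1a\n", "image/png")
```

In practice the bytes would come from `open("photo.png", "rb").read()`; passing a plain https URL avoids the encoding step entirely.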
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden. It discloses that the tool accepts URLs or base64 and provides an example response showing the 'size_bytes' format. However, it omits critical behavioral details like supported image formats, size limits, error handling for invalid URLs, or whether the image is temporarily stored.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description opens with the tautological 'Analyze an image' before getting to the substance. The inclusion of HTTP response codes (### Responses, **200**) is atypical for MCP tool descriptions and adds verbosity without value for agent invocation decisions. However, the example response is appropriately helpful given the lack of a structured output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no annotations, the description adequately compensates by providing an example response that clarifies the return value (size_bytes). However, it leaves ambiguity about whether 'size_bytes' is the only metadata returned or if other fields are omitted from the example.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the 'source' parameter already well-documented as accepting 'a public URL (https://...) or a base64-encoded string.' The description repeats this information ('Fetch an image from a URL or base64') without adding syntax clarification beyond the schema, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches an image and returns metadata specifically 'size in bytes,' which is concrete. However, it lacks differentiation from the sibling tool 'get_format_info,' which likely also returns image metadata, leaving ambiguity about which to use for general metadata retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The only usage guidance provided is 'Always free,' which is a cost signal but offers no indication of when to use this tool versus siblings like 'get_format_info' or 'compress_image.' There are no prerequisites, exclusions, or workflow positioning hints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compress_image (B)
Compress an image
Re-encode an image with quality/format options to reduce file size. Supports jpeg, png, webp, tiff, gif.
Responses:
200: Processed image binary (Success Response)
Content-Type: application/json, image/jpeg, image/png, or image/webp
Example Response: "string"

| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Quality (1-100). Lower = smaller file. Typical values: 60-80 for web, 85-95 for print. Maps to libvips Q parameter. | |
| strip | No | Strip metadata (EXIF, ICC profile, etc.) from the output. Reduces file size slightly. | |
| format | No | Output format: jpeg, png, webp, tiff, or gif. If omitted, the original format is preserved. | |
| source | Yes | Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...). | |
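Since the schema documents concrete constraints (q in 1-100, a fixed format set), a client can validate arguments before calling. A hedged sketch; the `compress_args` helper is hypothetical, only the constraints come from the schema:

```python
ALLOWED_FORMATS = {"jpeg", "png", "webp", "tiff", "gif"}

def compress_args(source, q=None, strip=None, fmt=None):
    """Build the argument dict for compress_image, enforcing documented ranges."""
    if q is not None and not 1 <= q <= 100:
        raise ValueError("q must be in 1-100")
    if fmt is not None and fmt not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported format: {fmt}")
    args = {"source": source}
    if q is not None:
        args["q"] = q
    if strip is not None:
        args["strip"] = strip
    if fmt is not None:
        args["format"] = fmt  # omitted -> original format is preserved
    return args

args = compress_args("https://example.com/photo.jpg", q=75, strip=True, fmt="webp")
```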
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool 're-encode[s]' and documents the response types (image binary with specific Content-Types: jpeg, png, webp) in the Responses section, which compensates partially for the lack of output schema. However, it omits safety warnings (e.g., loss of transparency when converting PNG to JPEG) and error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, but the '### Responses' section contains repetitive boilerplate (three identical JSON 'string' examples for different content types) that wastes tokens without adding distinct information. The structure is logical but could be tightened.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description provides necessary behavioral context by documenting the 200 response status and possible Content-Type headers (image/jpeg, image/png, etc.). With 100% parameter schema coverage and only 4 simple parameters, the description provides adequate completeness, though error scenarios remain undocumented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (q, strip, format, source are all well-documented). The description mentions 'quality/format options' and lists supported formats, but this largely repeats the schema content rather than adding new semantic context or usage examples beyond the structured data.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Compress[es] an image' and 'Re-encode[s] an image with quality/format options to reduce file size.' It specifies the supported formats (jpeg, png, webp, tiff, gif). However, it does not explicitly differentiate from the sibling 'convert_image' tool, which may also change formats but likely without the specific file-size reduction focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does (re-encode to reduce file size) but provides no guidance on when to use it versus siblings like 'convert_image' or 'resize_image.' It lacks warnings about lossy compression trade-offs or when to strip metadata versus preserve it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
convert_image (A)
Convert image format
Convert an image to a different format (jpeg, png, webp, tiff, gif). Optionally set quality, strip metadata, or enable lossless mode (webp).
Responses:
200: Processed image binary (Success Response)
Content-Type: application/json, image/jpeg, image/png, or image/webp
Example Response: "string"

| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Quality (1-100). Maps to libvips Q parameter. | |
| strip | No | Strip metadata (EXIF, ICC profile, etc.) from the output. | |
| format | Yes | Target format: jpeg, png, webp, tiff, or gif. | |
| source | Yes | Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...). | |
| lossless | No | Enable lossless encoding. Only applies to webp. | |
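The lossless flag interacts with format (webp only), which a client can check before invoking the tool. A sketch under that documented constraint; the `convert_args` helper name is ours:

```python
FORMATS = {"jpeg", "png", "webp", "tiff", "gif"}

def convert_args(source, fmt, q=None, strip=None, lossless=None):
    """Build arguments for convert_image; unlike compress_image, format is required."""
    if fmt not in FORMATS:
        raise ValueError(f"unsupported target format: {fmt}")
    if lossless and fmt != "webp":
        raise ValueError("lossless only applies to webp")
    args = {"source": source, "format": fmt}
    for key, value in (("q", q), ("strip", strip), ("lossless", lossless)):
        if value is not None:
            args[key] = value
    return args

args = convert_args("https://example.com/logo.png", "webp", lossless=True)
```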
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses key behavioral traits: supported formats, that metadata (EXIF/ICC) can be stripped, and that lossless encoding is webp-only. It also includes response format information (binary/image data) compensating for the lack of output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The core functional description is concise (two sentences). However, the inclusion of verbose HTTP response documentation with repetitive JSON examples ('string') adds unnecessary bulk that does not significantly aid tool selection. The structure mixes functional description with API response specs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description provides adequate coverage: it explains the conversion capability, lists supported formats, documents response behavior (200/binary), and notes important constraints (lossless webp-only). With rich schema coverage for inputs, this is sufficient for agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value beyond the schema, primarily grouping the optional parameters (quality, strip, lossless) and emphasizing the webp-specific nature of lossless mode, which is already documented in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts images to specific formats (jpeg, png, webp, tiff, gif) with specific optional features. However, it does not explicitly differentiate from sibling tools like 'compress_image' or 'image_pipeline' which may overlap in functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage contexts by mentioning optional capabilities (quality settings, metadata stripping, lossless mode), including the important constraint that lossless only applies to webp. However, it lacks explicit guidance on when to choose this tool over 'compress_image' or other siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
crop_image (B)
Crop an image
Extract a rectangular region from an image. Specify the top-left corner (x, y) and the dimensions (width, height) in pixels.
Responses:
200: Processed image binary (Success Response)
Content-Type: application/json, image/jpeg, image/png, or image/webp
Example Response: "string"

| Name | Required | Description | Default |
|---|---|---|---|
| x | Yes | Left edge of the crop rectangle in pixels. | |
| y | Yes | Top edge of the crop rectangle in pixels. | |
| width | Yes | Width of the crop rectangle in pixels. | |
| format | No | Output format: jpeg, png, webp, tiff, or gif. If omitted, the original format is preserved. | |
| height | Yes | Height of the crop rectangle in pixels. | |
| source | Yes | Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...). | |
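The listing does not document how out-of-bounds rectangles are handled, so validating the crop against known image dimensions on the client side is a safe assumption. A minimal sketch:

```python
def crop_in_bounds(x, y, width, height, img_w, img_h):
    """True if the crop rectangle lies fully inside an img_w x img_h image."""
    if x < 0 or y < 0 or width <= 0 or height <= 0:
        return False
    return x + width <= img_w and y + height <= img_h

crop_in_bounds(100, 50, 400, 300, 800, 600)  # True: rectangle fits
crop_in_bounds(600, 50, 400, 300, 800, 600)  # False: right edge overruns
```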
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses output behavior by listing response formats (jpeg, png, webp) and that it returns binary image data, compensating for the lack of output schema. However, it omits error handling, coordinate boundary constraints, and whether the operation is idempotent or destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The core description is appropriately front-loaded with two clear sentences explaining the operation. However, the 'Responses' section is unnecessarily verbose, containing repetitive example blocks (all showing 'string') and HTTP-specific metadata that adds noise without aiding tool selection for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 6 parameters with complete schema coverage and no output schema, the description adequately covers the functional contract by explaining input semantics and output formats. It lacks error documentation and coordinate system constraints (e.g., zero-indexing), but provides sufficient context for basic invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds semantic context by framing x and y as the 'top-left corner,' which helps conceptualize the coordinate system, but otherwise largely restates the schema's parameter descriptions without adding syntax examples or format constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Crop[s] an image' and 'Extract a rectangular region,' providing a specific verb and resource. It distinguishes this from sibling tools like resize_image or convert_image by specifying the rectangular extraction mechanism and coordinate-based operation, though it does not explicitly contrast use cases with named siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains how to use the tool (specifying x, y, width, height) but provides no explicit guidance on when to choose this over alternatives like resize_image or analyze_image. There are no 'when-not' conditions, prerequisites, or workflow recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_format_info (B)
Get supported formats and options
Returns supported output formats and their configurable options.
Responses:
200: Successful Response (Success Response)
Content-Type: application/json
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool returns format options and mentions a JSON response, but lacks critical behavioral context such as whether results are cached, if the call is idempotent, or what specific 'options' refers to (compression settings? quality ranges?).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The first sentence is efficient, but the second sentence largely restates the first ('Returns supported output formats...'). The inclusion of the HTTP response documentation ('### Responses: **200**...') adds structural noise irrelevant to an AI agent's tool selection decision.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description must explain return values. It mentions 'supported output formats and their configurable options' but remains vague about the structure (is it a list? nested objects?) and omits the image-specific context that would make the return value meaningful given the tool ecosystem.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which establishes a baseline of 4 per evaluation rules. The description appropriately does not invent parameters, maintaining consistency with the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'supported formats and their configurable options' using the specific verb 'Get'. However, it fails to specify that these are image formats (evident from siblings like convert_image, compress_image), which would help the agent understand the domain context immediately.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus siblings. It should explicitly state this is for discovering capabilities before using convert_image or compress_image, but instead offers no contextual usage hints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
image_pipeline (A)
Run a multi-step image pipeline
Chain multiple operations (resize, compress, convert, crop) in sequence. The image is fetched once, then each operation is applied to the output of the previous one. Max 10 operations per pipeline.
Responses:
200: Processed image binary (Success Response)
Content-Type: application/json, image/jpeg, image/png, or image/webp
Example Response: "string"

| Name | Required | Description | Default |
|---|---|---|---|
| source | Yes | Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...). | |
| operations | Yes | Ordered list of operations to apply sequentially. Each operation receives the output of the previous one. Max 10. | |
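The listing documents the 10-operation cap but not the exact shape of each operation object; the `{"op": ..., "params": ...}` shape below is an assumption for illustration only, as is the `pipeline_args` helper:

```python
def pipeline_args(source, operations):
    """Build arguments for image_pipeline, enforcing the documented 10-operation cap."""
    if not operations:
        raise ValueError("pipeline needs at least one operation")
    if len(operations) > 10:
        raise ValueError("max 10 operations per pipeline")
    return {"source": source, "operations": operations}

# Hypothetical operation shape: resize to half size, then convert to webp.
ops = [
    {"op": "resize", "params": {"scale": 0.5}},
    {"op": "convert", "params": {"format": "webp", "q": 80}},
]
args = pipeline_args("https://example.com/photo.jpg", ops)
```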
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the execution model (sequential processing with single fetch), the operation limit (max 10), and response formats (image/jpeg, png, webp). It could be improved by mentioning error handling behavior (what happens if an operation in the chain fails).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The core description is appropriately concise (2 sentences explaining purpose and mechanism). However, the '### Responses' section containing HTTP status codes and placeholder JSON examples ('string') constitutes unnecessary bloat for an AI agent selecting tools; this technical documentation does not aid in tool selection and could be removed or simplified.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (chained operations with nested parameters) and lack of output schema, the description adequately covers the essential behavioral contract including the sequential processing model, operation limits, and return content types. It sufficiently prepares an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting both the source parameter and the operations array structure including the nested operation types and params. The description mentions the max 10 limit and lists the operation types, but since the schema is comprehensive, the description adds only marginal semantic value beyond the structured documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Run[s] a multi-step image pipeline' and specifies it chains operations (resize, compress, convert, crop) in sequence. This effectively distinguishes it from single-operation siblings like compress_image or resize_image by emphasizing the multi-step/chaining capability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear workflow guidance explaining that 'The image is fetched once, then each operation is applied to the output of the previous one' and notes the 'Max 10 operations per pipeline' constraint. However, it does not explicitly state when to choose this over individual operation tools (e.g., 'use this instead of individual tools when you need multiple operations').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resize_image (B)
Resize an image
Scale an image by a factor. Use 'scale' for uniform scaling, or 'scale_x'/'scale_y' for independent axes. Values are float factors (e.g. 0.5 = half size).
Responses:
200: Processed image binary (Success Response)
Content-Type: application/json, image/jpeg, image/png, or image/webp
Example Response: "string"

| Name | Required | Description | Default |
|---|---|---|---|
| scale | No | Uniform scale factor applied to both axes (e.g. 0.5 = half size). Use this for simple scaling; use scale_x/scale_y for independent axes. | |
| format | No | Output format: jpeg, png, webp, tiff, or gif. If omitted, the original format is preserved. | |
| source | Yes | Image source: a public URL (https://...) or a base64-encoded string (optionally as a data URI like data:image/png;base64,...). | |
| scale_x | No | Horizontal scale factor (e.g. 0.5 = half width). If only scale_x is given, scale_y defaults to the same value. | |
| scale_y | No | Vertical scale factor (e.g. 0.75 = 75% height). Optional; defaults to scale_x if omitted. | |
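The defaulting rules (scale applies to both axes; scale_y falls back to scale_x) can be mirrored locally to predict output dimensions. A sketch, assuming the server rounds to whole pixels:

```python
def resized_dims(w, h, scale=None, scale_x=None, scale_y=None):
    """Predict output size from the documented parameter defaulting."""
    if scale is not None:
        sx = sy = scale
    elif scale_x is not None:
        sx = scale_x
        sy = scale_y if scale_y is not None else scale_x  # scale_y defaults to scale_x
    else:
        raise ValueError("provide scale or scale_x")
    return round(w * sx), round(h * sy)

resized_dims(800, 600, scale=0.5)                  # (400, 300)
resized_dims(800, 600, scale_x=0.5, scale_y=0.75)  # (400, 450)
```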
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry full behavioral burden. It documents response formats (image/jpeg, png, webp binaries) compensating for the lack of output schema, but omits safety information (read-only vs destructive), rate limits, or maximum dimension constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The functional description is efficiently front-loaded, but the Responses section is bloated with repetitive formatting (multiple Content-Type headers, redundant JSON 'string' examples) that could be condensed without losing information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately covers response types (200 status, binary content types) and examples. Parameter semantics are fully covered by the schema. Minor gaps remain regarding error handling and operational limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description adds conceptual grouping of parameters ('uniform scaling' vs 'independent axes') but does not significantly expand on the detailed examples and format specifications already present in the schema property descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Scale an image by a factor') and mechanism (uniform vs independent axes scaling), which implicitly distinguishes it from cropping or format conversion. However, it lacks explicit differentiation from sibling tools like 'crop_image' that also modify dimensions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on parameter selection ('Use scale for uniform scaling, or scale_x/scale_y for independent axes'), but offers no guidance on when to choose this tool over siblings like 'crop_image' or 'image_pipeline' for dimension changes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
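Before publishing, the file can be sanity-checked locally. A minimal sketch that verifies the document parses and lists the expected maintainer email (the `check_glama_json` helper is ours, not a Glama tool):

```python
import json

def check_glama_json(text, expected_email):
    """True if the document parses and lists expected_email as a maintainer."""
    try:
        data = json.loads(text)
    except ValueError:
        return False
    maintainers = data.get("maintainers", [])
    return any(m.get("email") == expected_email for m in maintainers)

doc = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
check_glama_json(doc, "your-email@example.com")  # True
```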
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!