tinify-ai
Server Details
MCP server for Tinify image optimization — one tool, max optimization
- Status: Healthy
- Transport: Streamable HTTP
- Repository: onepunchtechnology/tinify-ai-mcp-server
- GitHub Stars: 1
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
3 tools

optimize_image (Optimize Image)
Optimize an image: smart lossy compression (typically 60-80% size reduction), optional resize/upscale/format conversion, and AI-generated SEO metadata. Accepts absolute local file paths or remote URLs. In remote/API mode, only remote URLs are supported. Supported input formats: JPG, PNG, WebP, AVIF, GIF, SVG, ICO, HEIC, TIFF, BMP (max 50 MB). Supported output formats: JPG, PNG, WebP, AVIF, GIF, SVG, ICO. Each call costs 3 credits + 1 if SEO tags enabled. Animated GIFs are processed frame-by-frame (each frame optimized individually). Cost = frames × per-frame operations. Use confirm_gif_cost: true after reviewing the cost warning. Free tier: 20 credits/day, no signup. Log in with the login tool for more credits. Use status tool to check remaining credits before batch processing.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | Absolute local file path or remote URL of the image to optimize. Note: in remote/API mode, only remote URLs are supported (no local file paths). Supported inputs: JPG, PNG, WebP, AVIF, GIF (animated supported), HEIC, TIFF, BMP (max 50 MB). Tinify supports high-quality conversion between any input and output format. | |
| output_path | No | Where to save. Accepts a file path (/tmp/out.webp) or directory ending in / (/tmp/images/). If omitted: saves next to original, named with SEO slug when SEO is enabled or .tinified suffix otherwise. URLs save to current working directory. | |
| output_format | No | Output format. Defaults to 'original' (keep input format). Animated GIFs stay animated when output is 'gif'; converting to other formats preserves only the first frame. SVG output from raster input uses vector tracing. ICO output generates a favicon set (16, 24, 32, 48, 256px) unless a specific size is given. | |
| gif_frame_limit | No | Maximum frames to process for animated GIFs (1-100, default 100). Reduces cost by sampling fewer frames while preserving animation. | |
| output_width_px | No | Target width in pixels. Set only width for proportional resize. Set both width and height for exact output dimensions (see output_resize_behavior). | |
| confirm_gif_cost | No | Set to true to proceed with animated GIF processing after seeing cost warning. Required for animated GIFs to prevent unexpected credit consumption. | |
| output_height_px | No | Target height in pixels. Set only height for proportional resize. Set both width and height for exact output dimensions (see output_resize_behavior). | |
| _gif_temp_file_id | No | Internal: temp file ID from a previous GIF cost warning. Skips re-upload. | |
| output_seo_tag_gen | No | Generate SEO metadata (alt text, keywords, filename) and rename output file to SEO slug. Costs 1 extra credit. Default: true. | |
| output_upscale_factor | No | AI upscale factor: 2 (2×) or 4 (4×). Uses Real-ESRGAN for high-quality upscaling. | |
| output_file_size_limit | No | Target maximum output file size in bytes. The server will attempt to meet this limit through additional compression. Not guaranteed. | |
| output_resize_behavior | No | When both width and height are set and the aspect ratio differs: 'pad' adds white padding (default), 'crop' smart-crops to fill the exact dimensions. | |
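The credit arithmetic stated in the description (3 credits per call, +1 with SEO tags, and for animated GIFs cost = frames × per-frame operations) can be sketched as a small estimator. This is a minimal sketch of the documented pricing, assuming the SEO surcharge also applies per frame for GIFs, which the description does not spell out:

```python
def credit_cost(frames: int = 1, seo_tags: bool = True) -> int:
    """Estimate credits consumed by one optimize_image call.

    Per the tool description: each call costs 3 credits, plus 1 if
    SEO tag generation is enabled. Animated GIFs are processed
    frame-by-frame, multiplying the per-frame cost by the number of
    frames processed (capped by gif_frame_limit).

    Assumption: the +1 SEO credit is treated as per-frame here; the
    description does not state whether it is charged once per call.
    """
    per_frame = 3 + (1 if seo_tags else 0)
    return frames * per_frame
```

For example, a still image with SEO tags enabled costs 4 credits, and a 30-frame GIF with SEO disabled costs 90, which is why the confirm_gif_cost safeguard exists.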
Output Schema
| Name | Required | Description |
|---|---|---|
| output_path | Yes | Absolute path where the optimized file was saved |
| seo_alt_text | Yes | AI-generated image alt text for accessibility and SEO |
| seo_filename | Yes | AI-generated SEO filename slug without extension |
| seo_keywords | Yes | AI-generated keywords describing the image |
| output_format | Yes | Output format: jpg, png, webp, avif, or gif |
| output_width_px | Yes | Width of the output image in pixels |
| output_height_px | Yes | Height of the output image in pixels |
| compression_ratio | Yes | Output-to-input size ratio, e.g. 0.35 means 65% smaller |
| output_size_bytes | Yes | File size of the optimized image in bytes |
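The compression_ratio field is an output-to-input size ratio, so the percent reduction is its complement. A one-line helper, assuming the ratio semantics given in the schema row above:

```python
def size_reduction_pct(compression_ratio: float) -> float:
    """Convert the output-to-input size ratio to a percent reduction.

    A ratio of 0.35 means the output is 35% of the input size,
    i.e. 65% smaller.
    """
    return round((1 - compression_ratio) * 100, 1)
```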
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden and does so effectively: it discloses credit costs (3+1), the file size limit (50 MB), format support, frame-by-frame costs for animated GIFs, and the free-tier limit (20/day). It could clarify whether original files are ever overwritten.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with zero waste: front-loaded purpose, followed by capabilities, constraints, costs, and workflow. Slightly lengthy but every sentence delivers unique value (costs, formats, limits, sibling references).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 12 parameters, 100% schema coverage, and existing output schema, description comprehensively covers costs, authentication needs (login tool), batch workflow prerequisites, and format constraints without needing to describe return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), adds crucial cost semantics: 'Each call costs 3 credits + 1 if SEO tags enabled' explains output_seo_tag_gen implications, and 'Cost = frames × per-frame operations' clarifies confirm_gif_cost purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb+resource ('Optimize an image: smart lossy compression') and clearly distinguishes from siblings by detailing image-specific operations (resize, upscale, format conversion, SEO metadata) that status/upgrade cannot perform.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use status tool to check remaining credits before batch processing' and explains the confirm_gif_cost workflow for cost management. Defines remote/API mode constraints. Lacks explicit 'when not to use' contrast with upgrade tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
status (Account Status)
Check your Tinify account status: login state, tier, credits remaining, and credit reset time. Use this before batch processing to verify sufficient credits.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
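The recommended workflow (check status before batch processing) amounts to dividing remaining credits by the per-call cost. A hypothetical planning helper, assuming the status tool reports a credits-remaining count and that each still-image call costs 3 credits plus 1 with SEO tags, as the optimize_image description states:

```python
def plan_batch(credits_remaining: int, image_count: int,
               seo_tags: bool = True) -> int:
    """Return how many images from a batch fit in the remaining credits.

    Hypothetical helper: credits_remaining would come from the status
    tool; the per-call cost follows the optimize_image description
    (still images only, no animated GIFs).
    """
    per_call = 3 + (1 if seo_tags else 0)
    return min(image_count, credits_remaining // per_call)
```

For instance, with 20 free-tier credits and SEO tags enabled (4 credits/call), at most 5 images can be processed before credits reset.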
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It reveals what data is accessed (login, tier, credits) but omits behavioral traits like idempotency, rate limits, or error states (e.g., unauthenticated access). The verb 'Check' implies read-only behavior but does not explicitly confirm safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence front-loads the core function and return values; second sentence provides actionable usage context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates effectively for the lack of output schema by enumerating return values (login state, tier, credits, reset time). For a zero-parameter status tool, this is functionally complete, though explicit mention of error response format would elevate it to 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, establishing a baseline of 4. The description implies no user input is required ('Check your... status'), which aligns with the empty schema and suggests the operation relies on implicit authentication context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Check') and resource ('Tinify account status'), explicitly listing returned data (login state, tier, credits, reset time). It clearly distinguishes from siblings 'optimize_image' (processing) and 'upgrade' (account modification) by focusing on read-only status retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit temporal guidance ('before batch processing') and rationale ('verify sufficient credits'). However, it misses the opportunity to reference sibling tool 'upgrade' as the logical next step if credits are insufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upgrade (Upgrade Plan)
Open the Tinify pricing page in your browser to upgrade your plan for more credits. Plans: Free (50/day), Pro (3,000/month), Max (10,000/month).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It effectively discloses that the tool opens an external browser page (significant behavioral trait) and provides useful context about available pricing tiers (Free, Pro, Max), though it does not specify if the operation blocks or returns confirmation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently structured: first states the action and purpose, second provides relevant pricing context. Every sentence earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with no output schema, the description is complete. It explains the action (browser opening), the intent (upgrading), and provides decision-making context (plan tiers) sufficient for an agent to invoke this correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, establishing a baseline score of 4 per evaluation rules. No parameter documentation is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool opens the Tinify pricing page to upgrade the plan for more credits, using specific verb (Open) and resource (pricing page). It clearly distinguishes from siblings optimize_image (image processing) and status (checking status).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'to upgrade your plan for more credits' implies when to use the tool, but does not explicitly state when to use it versus alternatives or provide explicit exclusions. Sibling differentiation relies on tool names rather than descriptive guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.