Glama

Server Details

Normalize and convert more than 400 file types via TweekIT's hosted MCP streamable HTTP endpoint.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
equilibrium-team/tweekit-mcp
GitHub Stars
1

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
convert (grade B)

Convert an uploaded document payload with TweekIT.

The file must already be base64 encoded (see blob). The conversion can be resized and cropped by providing optional geometry parameters. For raster outputs, set alpha/bgColor to control transparency handling.

Args:

  • inext: Source file extension such as pdf, docx, or png.

  • outfmt: Desired output format (Fmt in the API body).

  • blob: Base64 encoded document payload (DocData).

  • apiKey: TweekIT API key (ApiKey header). Falls back to TWEEKIT_API_KEY env var.

  • apiSecret: TweekIT API secret (ApiSecret header). Falls back to TWEEKIT_API_SECRET env var.

  • noRasterize: Forwarded to TweekIT to skip rasterization when possible.

  • width: Optional pixel width to request in the output.

  • height: Optional pixel height to request in the output.

  • x1: Left crop coordinate in source pixels.

  • y1: Top crop coordinate in source pixels.

  • x2: Right crop coordinate in source pixels.

  • y2: Bottom crop coordinate in source pixels.

  • page: Page number to extract for multipage inputs.

  • alpha: Whether the output should preserve alpha transparency.

  • bgColor: Background color to composite behind transparent pixels.

Returns: A FastMCP Image or File payload, or an error description.
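Because the blob must already be base64 encoded and the geometry parameters are optional, a caller typically assembles the argument payload before invoking the tool. A minimal sketch of that step (the `build_convert_args` helper is hypothetical, not part of the server; the field names follow the parameter list above):

```python
import base64
import os


def build_convert_args(path, outfmt, *, width=None, height=None, page=None):
    """Assemble the argument dict for the TweekIT `convert` tool.

    The file is read and base64-encoded into `blob`, and `inext` is
    derived from the file's extension. Optional geometry parameters
    are included only when explicitly set.
    """
    with open(path, "rb") as f:
        blob = base64.b64encode(f.read()).decode("ascii")
    inext = os.path.splitext(path)[1].lstrip(".").lower()
    args = {"inext": inext, "outfmt": outfmt, "blob": blob}
    for name, value in (("width", width), ("height", height), ("page", page)):
        if value is not None:
            args[name] = value
    return args
```

Credentials (apiKey/apiSecret) are deliberately left out here, relying on the documented fallback to the TWEEKIT_API_KEY and TWEEKIT_API_SECRET environment variables.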

Parameters (JSON Schema)

Name | Required | Description
x1 | No | Left crop coordinate in source pixels.
x2 | No | Right crop coordinate in source pixels.
y1 | No | Top crop coordinate in source pixels.
y2 | No | Bottom crop coordinate in source pixels.
blob | Yes | Base64 encoded document payload (DocData).
page | No | Page number to convert for multi-page inputs.
alpha | No | Preserve alpha transparency when producing raster formats.
inext | Yes | Input file extension (e.g., pdf, docx, png).
width | No | Optional pixel width for the converted output.
apiKey | No | TweekIT API key passed via the ApiKey header. Defaults to the TWEEKIT_API_KEY environment variable when omitted.
height | No | Optional pixel height for the converted output.
outfmt | Yes | Requested output format to send as Fmt.
bgColor | No | Background color (hex RGB) to composite behind transparent pixels.
apiSecret | No | TweekIT API secret paired with the apiKey. Defaults to the TWEEKIT_API_SECRET environment variable when omitted.
noRasterize | No | Forward to TweekIT to disable rasterization when supported.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses some behavioral traits: it mentions the tool interacts with an external API (TweekIT), requires API credentials with fallback to environment variables, and returns specific payload types (Image or File) or errors. However, it lacks details on rate limits, authentication requirements beyond credentials, error handling specifics, or whether the operation is idempotent or has side effects. For a tool with 15 parameters and no annotations, this is a moderate but incomplete disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with a clear purpose statement, followed by key usage notes, and then lists parameters and returns in a formatted way. Every sentence adds value, though the parameter listing is somewhat redundant given the schema. It could be more front-loaded by emphasizing critical constraints earlier, but overall it's efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (15 parameters, no annotations, no output schema), the description is moderately complete. It covers the purpose, key constraints (base64 encoding, API interaction), and lists parameters and return types. However, it lacks details on error conditions, performance considerations, or examples of common use cases. For a tool with many optional parameters and external dependencies, more contextual guidance would be helpful to ensure correct usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all parameters are documented in the schema. The description adds minimal value beyond the schema: it briefly mentions that geometry parameters are for resizing/cropping and that alpha/bgColor control transparency for raster outputs. However, it doesn't explain parameter interactions, dependencies, or provide examples. With high schema coverage, the baseline is 3, and the description doesn't significantly enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Convert an uploaded document payload with TweekIT.' It specifies the verb (convert) and resource (document payload) and mentions the service (TweekIT). However, it doesn't explicitly differentiate from sibling tools like 'convert_url' or 'doctype', which likely handle similar conversion tasks via different methods.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some implied usage guidance: it mentions the file must be base64 encoded and references the 'blob' parameter, and it notes that conversion can be resized/cropped with optional geometry parameters. However, it doesn't explicitly state when to use this tool versus alternatives like 'convert_url' (which might handle URLs instead of uploaded files) or 'doctype' (which might detect file types). No explicit when-not-to-use or prerequisite information is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_url (grade A)

Download a remote file and convert it with TweekIT in one step.

This helper first fetches url, infers the input extension when possible, and then forwards the bytes to convert. Supply fetchHeaders when the remote resource needs authentication or custom headers.

Args:

  • url: Direct download URL for the source document or image.

  • outfmt: Desired output format (Fmt).

  • apiKey: TweekIT API key (ApiKey header). Falls back to TWEEKIT_API_KEY env var.

  • apiSecret: TweekIT API secret (ApiSecret header). Falls back to TWEEKIT_API_SECRET env var.

  • inext: Optional override for the source extension if it cannot be detected from the URL or response headers.

  • noRasterize: Forwarded to TweekIT to skip rasterization when possible.

  • width: Optional pixel width to request in the output.

  • height: Optional pixel height to request in the output.

  • x1: Left crop coordinate in source pixels.

  • y1: Top crop coordinate in source pixels.

  • x2: Right crop coordinate in source pixels.

  • y2: Bottom crop coordinate in source pixels.

  • page: Page number to extract for multipage inputs.

  • alpha: Whether the output should preserve alpha transparency.

  • bgColor: Background color to composite behind transparent pixels.

  • fetchHeaders: Optional mapping of HTTP headers to include when fetching.

Returns: A FastMCP Image or File payload, or an error description.
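The extension inference described above can also be approximated client-side when you want to pass inext explicitly. This hypothetical helper (not part of the server) sketches one way to guess the extension from the URL path and to attach fetchHeaders for authenticated downloads:

```python
import os
from urllib.parse import urlparse


def infer_extension(url):
    """Best-effort guess of the source extension from the URL path.

    The query string and fragment are ignored; returns None when the
    path carries no extension, in which case the server's own
    detection (or an explicit inext override) must take over.
    """
    ext = os.path.splitext(urlparse(url).path)[1]
    return ext.lstrip(".").lower() or None


def build_convert_url_args(url, outfmt, *, fetch_headers=None):
    """Assemble arguments for the convert_url tool.

    inext is supplied only when it can be inferred from the URL;
    fetchHeaders is included only when the remote resource needs
    authentication or custom headers.
    """
    args = {"url": url, "outfmt": outfmt}
    ext = infer_extension(url)
    if ext:
        args["inext"] = ext
    if fetch_headers:
        args["fetchHeaders"] = dict(fetch_headers)
    return args
```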

Parameters (JSON Schema)

Name | Required | Description
x1 | No | Left crop coordinate in source pixels.
x2 | No | Right crop coordinate in source pixels.
y1 | No | Top crop coordinate in source pixels.
y2 | No | Bottom crop coordinate in source pixels.
url | Yes | Direct download URL for the source document or image.
page | No | Page number to convert for multi-page inputs.
alpha | No | Preserve alpha transparency when producing raster formats.
inext | No | Override for the detected input extension (e.g., pdf).
width | No | Optional pixel width for the converted output.
apiKey | No | TweekIT API key passed via the ApiKey header. Defaults to the TWEEKIT_API_KEY environment variable when omitted.
height | No | Optional pixel height for the converted output.
outfmt | Yes | Requested output format to send as Fmt.
bgColor | No | Background color (hex RGB) to composite behind transparent pixels.
apiSecret | No | TweekIT API secret paired with the apiKey. Defaults to the TWEEKIT_API_SECRET environment variable when omitted.
noRasterize | No | Forward to TweekIT to disable rasterization when supported.
fetchHeaders | No | Optional HTTP headers to include when downloading the URL.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses key behavioral traits: the two-step process (fetch then convert), authentication fallbacks to environment variables, and the return type (FastMCP payload or error). However, it lacks details on rate limits, error conditions beyond generic 'error description', or performance characteristics that would help an agent anticipate behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured and front-loaded: the first sentence captures the core purpose, followed by essential behavioral context, then a clean parameter list with helpful annotations, and finally the return value. Every sentence earns its place with zero wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex 16-parameter tool with no annotations and no output schema, the description does well: it explains the workflow, covers authentication, documents all parameters with extra context, and specifies return types. The main gap is lack of detailed error handling or performance expectations, preventing a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context beyond the schema: it explains the tool's internal workflow (infers extension, forwards to convert), clarifies parameter relationships (e.g., 'fetchHeaders' for authentication), and provides practical usage guidance (e.g., 'when the remote resource needs authentication'). This elevates the score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('download', 'convert') and resources ('remote file', 'TweekIT'), distinguishing it from siblings like 'convert' (which likely converts local files) and 'fetch' (which only downloads). The opening sentence precisely defines the two-step operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (for remote files needing conversion) and mentions 'fetchHeaders' for authentication needs. However, it doesn't explicitly state when to use alternatives like 'convert' (for local files) or 'fetch' (for download-only), leaving some sibling differentiation implicit rather than explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

doctype (grade A)

Retrieve a list of supported file formats or map a file extension to its document type.

Args:

  • apiKey (str): The API key for authentication. Falls back to TWEEKIT_API_KEY when omitted.

  • apiSecret (str): The API secret for authentication. Falls back to TWEEKIT_API_SECRET when omitted.

  • extension (str): The file extension to query (e.g., 'jpg', 'pdf'), or leave off or use '*' to return all supported input formats.

Returns: Dict[str, Any]: A dictionary containing the supported file formats or an error message.
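Since both credentials fall back to environment variables and extension defaults to '*', the simplest call lists every supported format with no explicit arguments at all. A hypothetical helper (not part of the server) showing how the argument dict shrinks to only what is overridden:

```python
def build_doctype_args(extension="*", api_key=None, api_secret=None):
    """Assemble arguments for the doctype tool.

    With the default '*' the call lists every supported input format.
    Credentials are included only when passed explicitly; otherwise
    the server falls back to TWEEKIT_API_KEY / TWEEKIT_API_SECRET.
    """
    args = {"extension": extension}
    if api_key is not None:
        args["apiKey"] = api_key
    if api_secret is not None:
        args["apiSecret"] = api_secret
    return args
```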

Parameters (JSON Schema)

Name | Required | Description | Default
apiKey | No | TweekIT API key passed via the ApiKey header. Defaults to the TWEEKIT_API_KEY environment variable when omitted. |
apiSecret | No | TweekIT API secret paired with the apiKey. Defaults to the TWEEKIT_API_SECRET environment variable when omitted. |
extension | No | File extension to inspect; use '*' to list all supported inputs. | *

Output Schema

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses authentication behavior (API key/secret with fallbacks) and the dual return behavior (list vs. mapping), which is valuable. However, it doesn't mention rate limits, error conditions beyond 'error message', or whether this is a read-only operation (though 'Retrieve' suggests it is).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement followed by Args and Returns sections. Every sentence adds value, though the authentication parameter explanations slightly duplicate schema information. The front-loaded purpose statement efficiently communicates core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 parameters with full schema coverage and an output schema present, the description provides adequate context. It explains the tool's dual modes, authentication fallbacks, and parameter usage. The output schema means return values don't need description. However, more behavioral context (like rate limits or error specifics) would improve completeness for a tool with no annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by clarifying the extension parameter's special behavior ('*' for all formats) and providing concrete examples ('jpg', 'pdf'), which enhances understanding beyond the schema's technical specification. However, it doesn't elaborate on the authentication parameters beyond what the schema already states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Retrieve a list', 'map a file extension') and resources ('supported file formats', 'document type'). It distinguishes itself from sibling tools like 'convert' or 'search' by focusing on format identification rather than transformation or content lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the extension parameter explanation ('leave off or use '*' to return all'), but provides no explicit guidance on when to choose this tool over alternatives like 'fetch' or 'search'. It mentions the tool's dual functionality but doesn't specify scenarios where one mode is preferred over the other.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fetch (grade B)

Fetch a URL and return content.

  • Images return as FastMCP Image.

  • PDFs return as File(format="pdf").

  • Text/JSON return as a JSON payload with metadata and text.
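The three return shapes above amount to a dispatch on the response's MIME type. A hypothetical sketch of that mapping (illustrative only; the server's actual dispatch rules may differ):

```python
def classify_fetch_result(content_type: str) -> str:
    """Map a Content-Type header value to the payload shape the fetch
    tool is documented to return."""
    # Strip parameters like "; charset=utf-8" before comparing.
    mime = content_type.split(";")[0].strip().lower()
    if mime.startswith("image/"):
        return "image"      # returned as a FastMCP Image
    if mime == "application/pdf":
        return "pdf-file"   # returned as File(format="pdf")
    return "json"           # JSON payload with metadata and text
```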

Parameters (JSON Schema)

Name | Required | Description
url | Yes | HTTP or HTTPS URL to retrieve and normalize.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about return types for different content (images as FastMCP Image, PDFs as File, text/JSON as JSON payload). However, it doesn't mention important behavioral aspects like error handling, timeout behavior, authentication requirements, rate limits, or whether it follows redirects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured. The first sentence states the core purpose, followed by three bullet points that efficiently describe different return behaviors. Every sentence earns its place with zero wasted words, making it easy to scan and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with 100% schema coverage but no annotations or output schema, the description provides adequate but incomplete context. It covers return type variations well, but lacks information about error conditions, performance characteristics, or integration patterns. The absence of an output schema means the description should ideally provide more detail about response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the single 'url' parameter well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate since the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fetch a URL and return content' - a specific verb (fetch) and resource (URL content). It distinguishes from siblings like 'convert' or 'search' by focusing on retrieval rather than transformation or querying. However, it doesn't explicitly differentiate from 'convert_url' which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'convert_url' and 'search' available, there's no indication of when fetch is preferred over those tools or what specific use cases it addresses. The description only states what it does, not when to choose it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
