TweekIT MCP Server
Server Details
Normalize and convert more than 400 file types via TweekIT's hosted MCP streamable HTTP endpoint.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: equilibrium-team/tweekit-mcp
- GitHub Stars: 1
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
5 tools
convert (Grade B)
Convert an uploaded document payload with TweekIT.
The file must already be base64 encoded (see blob). The conversion can be
resized and cropped by providing optional geometry parameters. For raster
outputs, set alpha/bgColor to control transparency handling.
Args:
inext: Source file extension such as pdf, docx, or png.
outfmt: Desired output format (Fmt in the API body).
blob: Base64 encoded document payload (DocData).
apiKey: TweekIT API key (ApiKey header). Falls back to TWEEKIT_API_KEY env var.
apiSecret: TweekIT API secret (ApiSecret header). Falls back to TWEEKIT_API_SECRET env var.
noRasterize: Forwarded to TweekIT to skip rasterization when possible.
width: Optional pixel width to request in the output.
height: Optional pixel height to request in the output.
x1: Left crop coordinate in source pixels.
y1: Top crop coordinate in source pixels.
x2: Right crop coordinate in source pixels.
y2: Bottom crop coordinate in source pixels.
page: Page number to extract for multipage inputs.
alpha: Whether the output should preserve alpha transparency.
bgColor: Background color to composite behind transparent pixels.
Returns:
A FastMCP Image or File payload, or an error description.
| Name | Required | Description | Default |
|---|---|---|---|
| x1 | No | Left crop coordinate in source pixels. | |
| x2 | No | Right crop coordinate in source pixels. | |
| y1 | No | Top crop coordinate in source pixels. | |
| y2 | No | Bottom crop coordinate in source pixels. | |
| blob | Yes | Base64 encoded document payload (DocData). | |
| page | No | Page number to convert for multi-page inputs. | |
| alpha | No | Preserve alpha transparency when producing raster formats. | |
| inext | Yes | Input file extension (e.g., pdf, docx, png). | |
| width | No | Optional pixel width for the converted output. | |
| apiKey | No | TweekIT API key passed via the ApiKey header. Defaults to the TWEEKIT_API_KEY environment variable when omitted. | |
| height | No | Optional pixel height for the converted output. | |
| outfmt | Yes | Requested output format to send as Fmt. | |
| bgColor | No | Background color (hex RGB) to composite behind transparent pixels. | |
| apiSecret | No | TweekIT API secret paired with the apiKey. Defaults to the TWEEKIT_API_SECRET environment variable when omitted. | |
| noRasterize | No | Forward to TweekIT to disable rasterization when supported. | |
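As a concrete illustration, here is a minimal sketch of how a caller might assemble the convert arguments. The helper name is ours and the client that eventually sends the payload is assumed; only the field names come from the table above.

```python
import base64
import os

def build_convert_args(path, outfmt, **optional):
    # Hypothetical helper: base64-encode a local file into the
    # `blob` field and derive `inext` from the file name, as the
    # convert tool expects.
    with open(path, "rb") as f:
        blob = base64.b64encode(f.read()).decode("ascii")
    args = {
        "inext": os.path.splitext(path)[1].lstrip(".").lower(),
        "outfmt": outfmt,
        "blob": blob,
    }
    # Geometry/transparency extras (width, height, x1..y2, page,
    # alpha, bgColor) are forwarded only when actually set.
    args.update({k: v for k, v in optional.items() if v is not None})
    return args
```

For example, `build_convert_args("scan.pdf", "png", page=1, width=800)` yields the dictionary an MCP client would pass when invoking the convert tool.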
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses some behavioral traits: it mentions the tool interacts with an external API (TweekIT), requires API credentials with fallback to environment variables, and returns specific payload types (Image or File) or errors. However, it lacks details on rate limits, authentication requirements beyond credentials, error handling specifics, or whether the operation is idempotent or has side effects. For a tool with 15 parameters and no annotations, this is a moderate but incomplete disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It starts with a clear purpose statement, followed by key usage notes, and then lists parameters and returns in a formatted way. Every sentence adds value, though the parameter listing is somewhat redundant given the schema. It could be more front-loaded by emphasizing critical constraints earlier, but overall it's efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (15 parameters, no annotations, no output schema), the description is moderately complete. It covers the purpose, key constraints (base64 encoding, API interaction), and lists parameters and return types. However, it lacks details on error conditions, performance considerations, or examples of common use cases. For a tool with many optional parameters and external dependencies, more contextual guidance would be helpful to ensure correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning all parameters are documented in the schema. The description adds minimal value beyond the schema: it briefly mentions that geometry parameters are for resizing/cropping and that alpha/bgColor control transparency for raster outputs. However, it doesn't explain parameter interactions, dependencies, or provide examples. With high schema coverage, the baseline is 3, and the description doesn't significantly enhance understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Convert an uploaded document payload with TweekIT.' It specifies the verb (convert) and resource (document payload) and mentions the service (TweekIT). However, it doesn't explicitly differentiate from sibling tools like 'convert_url' or 'doctype', which likely handle similar conversion tasks via different methods.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some implied usage guidance: it mentions the file must be base64 encoded and references the 'blob' parameter, and it notes that conversion can be resized/cropped with optional geometry parameters. However, it doesn't explicitly state when to use this tool versus alternatives like 'convert_url' (which might handle URLs instead of uploaded files) or 'doctype' (which might detect file types). No explicit when-not-to-use or prerequisite information is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
convert_url (Grade A)
Download a remote file and convert it with TweekIT in one step.
This helper first fetches url, infers the input extension when possible,
and then forwards the bytes to convert. Supply fetchHeaders when the
remote resource needs authentication or custom headers.
Args:
url: Direct download URL for the source document or image.
outfmt: Desired output format (Fmt).
apiKey: TweekIT API key (ApiKey header). Falls back to TWEEKIT_API_KEY env var.
apiSecret: TweekIT API secret (ApiSecret header). Falls back to TWEEKIT_API_SECRET env var.
inext: Optional override for the source extension if it cannot be
detected from the URL or response headers.
noRasterize: Forwarded to TweekIT to skip rasterization when possible.
width: Optional pixel width to request in the output.
height: Optional pixel height to request in the output.
x1: Left crop coordinate in source pixels.
y1: Top crop coordinate in source pixels.
x2: Right crop coordinate in source pixels.
y2: Bottom crop coordinate in source pixels.
page: Page number to extract for multipage inputs.
alpha: Whether the output should preserve alpha transparency.
bgColor: Background color to composite behind transparent pixels.
fetchHeaders: Optional mapping of HTTP headers to include when fetching.
Returns:
A FastMCP Image or File payload, or an error description.
| Name | Required | Description | Default |
|---|---|---|---|
| x1 | No | Left crop coordinate in source pixels. | |
| x2 | No | Right crop coordinate in source pixels. | |
| y1 | No | Top crop coordinate in source pixels. | |
| y2 | No | Bottom crop coordinate in source pixels. | |
| url | Yes | Direct download URL for the source document or image. | |
| page | No | Page number to convert for multi-page inputs. | |
| alpha | No | Preserve alpha transparency when producing raster formats. | |
| inext | No | Override for the detected input extension (e.g., pdf). | |
| width | No | Optional pixel width for the converted output. | |
| apiKey | No | TweekIT API key passed via the ApiKey header. Defaults to the TWEEKIT_API_KEY environment variable when omitted. | |
| height | No | Optional pixel height for the converted output. | |
| outfmt | Yes | Requested output format to send as Fmt. | |
| bgColor | No | Background color (hex RGB) to composite behind transparent pixels. | |
| apiSecret | No | TweekIT API secret paired with the apiKey. Defaults to the TWEEKIT_API_SECRET environment variable when omitted. | |
| noRasterize | No | Forward to TweekIT to disable rasterization when supported. | |
| fetchHeaders | No | Optional HTTP headers to include when downloading the URL. | |
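A rough client-side sketch of the extension inference the docstring describes. Both helper names are ours, and the server's actual inference logic may differ; the sketch only helps a caller decide whether to pass an explicit inext override.

```python
from urllib.parse import urlparse

def guess_inext(url):
    # Client-side guess at the source extension from the URL path;
    # the server performs its own inference, this just helps decide
    # whether an explicit `inext` override is needed.
    name = urlparse(url).path.rsplit("/", 1)[-1]
    if "." not in name:
        return None
    return name.rsplit(".", 1)[-1].lower()

def build_convert_url_args(url, outfmt, fetch_headers=None, inext=None):
    args = {"url": url, "outfmt": outfmt}
    ext = inext or guess_inext(url)
    if ext:
        args["inext"] = ext
    if fetch_headers:
        # Forwarded to the remote fetch, e.g. for auth headers.
        args["fetchHeaders"] = dict(fetch_headers)
    return args
```

When the URL carries no extension (e.g., a query-string download link), the caller should supply `inext` explicitly.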
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behavioral traits: the two-step process (fetch then convert), authentication fallbacks to environment variables, and the return type (FastMCP payload or error). However, it lacks details on rate limits, error conditions beyond generic 'error description', or performance characteristics that would help an agent anticipate behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured and front-loaded: the first sentence captures the core purpose, followed by essential behavioral context, then a clean parameter list with helpful annotations, and finally the return value. Every sentence earns its place with zero wasted text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 16-parameter tool with no annotations and no output schema, the description does well: it explains the workflow, covers authentication, documents all parameters with extra context, and specifies return types. The main gap is lack of detailed error handling or performance expectations, preventing a perfect score.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context beyond the schema: it explains the tool's internal workflow (infers extension, forwards to convert), clarifies parameter relationships (e.g., 'fetchHeaders' for authentication), and provides practical usage guidance (e.g., 'when the remote resource needs authentication'). This elevates the score above baseline.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('download', 'convert') and resources ('remote file', 'TweekIT'), distinguishing it from siblings like 'convert' (which likely converts local files) and 'fetch' (which only downloads). The opening sentence precisely defines the two-step operation.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (for remote files needing conversion) and mentions 'fetchHeaders' for authentication needs. However, it doesn't explicitly state when to use alternatives like 'convert' (for local files) or 'fetch' (for download-only), leaving some sibling differentiation implicit rather than explicit.
doctype (Grade A)
Retrieve a list of supported file formats or map a file extension to its document type.
Args:
apiKey (str): The API key for authentication. Falls back to TWEEKIT_API_KEY when omitted.
apiSecret (str): The API secret for authentication. Falls back to TWEEKIT_API_SECRET when omitted.
extension (str): The file extension to query (e.g., 'jpg', 'pdf'); omit it, or pass '*', to return all supported input formats.
Returns:
Dict[str, Any]: A dictionary containing the supported file formats or an error message.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | TweekIT API key passed via the ApiKey header. Defaults to the TWEEKIT_API_KEY environment variable when omitted. | |
| apiSecret | No | TweekIT API secret paired with the apiKey. Defaults to the TWEEKIT_API_SECRET environment variable when omitted. | |
| extension | No | File extension to inspect; use '*' to list all supported inputs. | * |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
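A minimal, hypothetical payload builder for doctype, reflecting the defaults in the table above. Credential fields may be omitted when the server-side TWEEKIT_API_KEY / TWEEKIT_API_SECRET environment variables are set.

```python
def build_doctype_args(extension="*", api_key=None, api_secret=None):
    # '*' (the documented default) lists all supported input
    # formats; a concrete extension maps it to its document type.
    args = {"extension": extension}
    if api_key:
        args["apiKey"] = api_key
    if api_secret:
        args["apiSecret"] = api_secret
    return args
```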
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses authentication behavior (API key/secret with fallbacks) and the dual return behavior (list vs. mapping), which is valuable. However, it doesn't mention rate limits, error conditions beyond 'error message', or whether this is a read-only operation (though 'Retrieve' suggests it is).
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear purpose statement followed by Args and Returns sections. Every sentence adds value, though the authentication parameter explanations slightly duplicate schema information. The front-loaded purpose statement efficiently communicates core functionality.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with full schema coverage and an output schema present, the description provides adequate context. It explains the tool's dual modes, authentication fallbacks, and parameter usage. The output schema means return values don't need description. However, more behavioral context (like rate limits or error specifics) would improve completeness for a tool with no annotations.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by clarifying the extension parameter's special behavior ('*' for all formats) and providing concrete examples ('jpg', 'pdf'), which enhances understanding beyond the schema's technical specification. However, it doesn't elaborate on the authentication parameters beyond what the schema already states.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Retrieve a list', 'map a file extension') and resources ('supported file formats', 'document type'). It distinguishes itself from sibling tools like 'convert' or 'search' by focusing on format identification rather than transformation or content lookup.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the extension parameter explanation ('leave off or use '*' to return all'), but provides no explicit guidance on when to choose this tool over alternatives like 'fetch' or 'search'. It mentions the tool's dual functionality but doesn't specify scenarios where one mode is preferred over the other.
fetch (Grade B)
Fetch a URL and return content.
Images return as FastMCP Image.
PDFs return as File(format="pdf").
Text/JSON return as a JSON payload with metadata and text.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | HTTP or HTTPS URL to retrieve and normalize. | |
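Two small pre-flight helpers sketched from the description above: a scheme check for the url parameter, and a guess at which payload kind to expect. The content-type mapping is an assumption derived from the three documented cases, not the server's actual classification logic.

```python
from urllib.parse import urlparse

def fetchable(url):
    # fetch accepts only HTTP/HTTPS URLs per the schema.
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def expected_payload(content_type):
    # Mirrors the three documented return shapes: images become
    # FastMCP Image, PDFs become File, everything else a JSON
    # payload with metadata and text.
    if content_type.startswith("image/"):
        return "Image"
    if content_type == "application/pdf":
        return "File"
    return "JSON"
```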
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about return types for different content (images as FastMCP Image, PDFs as File, text/JSON as JSON payload). However, it doesn't mention important behavioral aspects like error handling, timeout behavior, authentication requirements, rate limits, or whether it follows redirects.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and well-structured. The first sentence states the core purpose, followed by three bullet points that efficiently describe different return behaviors. Every sentence earns its place with zero wasted words, making it easy to scan and understand.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with 100% schema coverage but no annotations or output schema, the description provides adequate but incomplete context. It covers return type variations well, but lacks information about error conditions, performance characteristics, or integration patterns. The absence of an output schema means the description should ideally provide more detail about response structure.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the single 'url' parameter well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate since the schema does the heavy lifting for parameter documentation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Fetch a URL and return content' - a specific verb (fetch) and resource (URL content). It distinguishes from siblings like 'convert' or 'search' by focusing on retrieval rather than transformation or querying. However, it doesn't explicitly differentiate from 'convert_url' which might have overlapping functionality.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'convert_url' and 'search' available, there's no indication of when fetch is preferred over those tools or what specific use cases it addresses. The description only states what it does, not when to choose it.
search (Grade A)
Simple web search using the DuckDuckGo HTML endpoint.
Returns a list of {title, url, snippet} objects. Parsing is best-effort.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search keywords to send to DuckDuckGo. | |
| max_results | No | Maximum number of results to return (1-10). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
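Since max_results is documented as 1-10, a cautious caller might clamp the value before the call. This helper is our sketch of that pattern; clamping client-side is a design choice, not documented server behavior.

```python
def build_search_args(query, max_results=None):
    # Clamp max_results into the documented 1-10 range so the
    # call is never rejected for an out-of-range value.
    args = {"query": query}
    if max_results is not None:
        args["max_results"] = max(1, min(10, int(max_results)))
    return args
```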
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that it uses DuckDuckGo's HTML endpoint and returns parsed results with 'Best-effort parsing,' indicating potential reliability issues. However, it lacks details on rate limits, authentication needs, or error handling, leaving behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded, with only three sentences that efficiently convey the tool's purpose, output format, and a key behavioral note ('Best-effort parsing'). Every sentence adds value without redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (web search with parsing), no annotations, and an output schema (implied by 'Returns a list of {title, url, snippet} objects'), the description is mostly complete. It covers the action, data source, output structure, and a reliability disclaimer, though it could benefit from more behavioral context like error cases.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents both parameters (query and max_results). The description does not add any additional meaning or context beyond what the schema provides, such as query formatting tips or result quality insights, meeting the baseline for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Simple web search using DuckDuckGo HTML endpoint.' It specifies the exact action (web search), the resource (DuckDuckGo), and the method (HTML endpoint), distinguishing it from sibling tools like convert, convert_url, doctype, and fetch which perform different functions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for web searches but does not explicitly state when to use this tool versus alternatives. It mentions 'Best-effort parsing,' which suggests limitations, but no guidance on when to choose this over other search methods or sibling tools is provided.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.