Glama
TICnine

Autotask MCP Server

autotask_create_ticket_attachment

Upload file attachments to existing Autotask tickets by providing base64-encoded content, with validation for the 3 MB size limit.

Instructions

Upload a file attachment to an existing ticket. The file content must be passed as a base64-encoded string in the data field (MCP is JSON-RPC, so binary bytes must be base64-encoded). Autotask enforces a 3 MB hard limit on ticket attachments; this tool validates the decoded size before calling the API and returns a clear error if the limit is exceeded. Example: { ticketId: 12345, title: "screenshot.png", data: "iVBORw0KGgoAAAANSUhEUgAA..." }
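The encode-then-validate flow described above can be sketched from the caller's side. This is a minimal illustration, not part of the server: the helper name `build_attachment_args` is hypothetical, and it mirrors the 3 MB decoded-size check the tool performs before calling the Autotask API.

```python
import base64
import os

MAX_ATTACHMENT_BYTES = 3 * 1024 * 1024  # Autotask's 3 MB ticket-attachment limit

def build_attachment_args(ticket_id: int, path: str) -> dict:
    """Read a local file and build the arguments for the tool call.

    Raises ValueError if the file exceeds the 3 MB limit, mirroring the
    validation the tool performs on the decoded payload.
    """
    size = os.path.getsize(path)
    if size > MAX_ATTACHMENT_BYTES:
        raise ValueError(f"{path} is {size} bytes; limit is {MAX_ATTACHMENT_BYTES}")
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {
        "ticketId": ticket_id,
        "title": os.path.basename(path),  # display title defaults to the filename
        "data": encoded,
    }
```

Checking the size before encoding avoids paying the roughly 33% base64 inflation for a file that would be rejected anyway.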

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| ticketId | Yes | The ticket ID to attach the file to | — |
| title | Yes | Display title for the attachment (typically the filename, e.g. "screenshot.png") | — |
| data | Yes | Base64-encoded file content. Maximum decoded size: 3 MB (Autotask ticket attachment limit). Example: read a file and pass its base64 representation here. | — |
| fullPath | No | Original filename including any path | `title` |
| contentType | No | MIME type of the file (e.g. "image/png", "application/pdf") | — |
| publish | No | Visibility: 1 = All Autotask Users, 2 = Internal Users Only | 1 |
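A complete arguments object with the optional parameters filled in might look like the following (all values are illustrative; the `data` string is the base64 encoding of the five bytes "Hello"):

```json
{
  "ticketId": 12345,
  "title": "error-log.txt",
  "data": "SGVsbG8=",
  "fullPath": "C:\\logs\\error-log.txt",
  "contentType": "text/plain",
  "publish": 2
}
```

Here `publish: 2` restricts visibility to internal users; omitting it leaves the attachment visible to all Autotask users.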
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden, and it adds valuable behavioral context: it discloses the 3 MB size limit, the pre-call validation, and the clear error on oversized files. It also notes the base64 encoding requirement imposed by JSON-RPC, though it does not cover permissions or the response format, which keeps it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by essential technical details (encoding, size limit, validation) and a clear example. Every sentence adds value without waste, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a mutation with a file upload) and the absence of annotations or an output schema, the description does well to cover key behaviors like the size limit and encoding. However, it says nothing about required permissions, error types beyond the size check, or what the response contains, leaving minor gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents every parameter thoroughly. The description adds little beyond it: the note that the `data` field requires base64 encoding and the 3 MB limit are largely redundant with the schema details. A baseline score of 3 is appropriate, since the schema does the heavy lifting.
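The 3 MB decoded-size constraint mentioned above can be checked without decoding the payload at all: for standard padded base64, decoded size is three bytes per four characters, minus padding. A minimal sketch (the helper names are hypothetical, not part of this server):

```python
import base64

def decoded_size(b64: str) -> int:
    """Decoded byte count of a padded base64 string, computed
    from its length: 3 bytes per 4 characters, minus padding."""
    b64 = b64.strip()
    if not b64:
        return 0
    padding = b64.count("=", len(b64) - 2)  # '=' appears only in the last 2 chars
    return (len(b64) * 3) // 4 - padding

def within_limit(b64: str, limit: int = 3 * 1024 * 1024) -> bool:
    """True if the decoded payload fits under the 3 MB attachment limit."""
    return decoded_size(b64) <= limit
```

This lets a client reject an oversized attachment from string length alone, before spending memory on a decode.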

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Upload a file attachment') and target resource ('to an existing ticket'), distinguishing it from sibling tools like 'autotask_create_ticket' (creates tickets) or 'autotask_create_ticket_note' (adds notes). It precisely identifies the tool's function without ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying "to an existing ticket," suggesting it follows ticket creation, but it does not explicitly state when to prefer this tool over alternatives (e.g., `autotask_create_ticket_note` for text notes) or name prerequisites such as the ticket's existence. Guidance is present but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

