get_attachment
Retrieve an attachment by its ID. Returns image content, decoded text, or metadata and base64 for binary files; large files return metadata only with a truncated flag.
Instructions
Download an attachment by id. Returns image content for image/* types (Claude can describe it natively); decoded text for text/* and application/json (small files); JSON metadata + base64 payload for other binary types (PDF, Office docs, etc.). Files exceeding maxSizeBytes (default 5MB) return metadata only with a truncated: true flag.
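The dispatch rule above can be reduced to a small sketch. This is illustrative only — `classify` and `ResponseShape` are made-up names, not part of this codebase; the branches mirror the description (image, text-ish, binary, and the size cap):

```typescript
// Sketch of get_attachment's response shaping (illustrative, not repo code).
type ResponseShape = "image" | "text" | "binary" | "metadata-only";

function classify(
  contentType: string,
  sizeBytes: number,
  maxSizeBytes = 5 * 1024 * 1024, // default 5MB cap
): ResponseShape {
  if (sizeBytes > maxSizeBytes) return "metadata-only"; // truncated: true
  // Parameters like "; charset=UTF-8" are stripped before comparing.
  const base = contentType.split(";")[0]!.trim().toLowerCase();
  if (base.startsWith("image/")) return "image";
  if (
    base.startsWith("text/") ||
    base === "application/json" ||
    base === "application/xml"
  ) {
    return "text";
  }
  return "binary"; // metadata + base64 payload
}
```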
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Attachment ID. | |
| maxSizeBytes | No | Refuse to return content over this size (max 26214400 bytes ≈ 25MB). Files exceeding the cap return metadata only with a `truncated: true` flag. | 5242880 (≈ 5MB) |
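For example, requesting an attachment that exceeds the cap returns a metadata-only payload shaped like the following (field names come from the handler; the `id`, `contentType`, and `sizeBytes` values here are illustrative):

```json
{
  "id": 123,
  "contentType": "application/pdf",
  "sizeBytes": 9437184,
  "truncated": true,
  "message": "File exceeds the size cap (5242880 bytes). Increase maxSizeBytes if you need the bytes; max is 25MB."
}
```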
Implementation Reference
- src/tools/attachments.ts:60-76 (handler): Core handler function for get_attachment. Calls capsuleGetBinary to fetch the attachment from the Capsule API and returns raw bytes + content type + size, with an optional truncated flag if the file exceeds the size cap.

```ts
export async function getAttachment(
  input: z.infer<typeof getAttachmentSchema>,
): Promise<AttachmentResult> {
  const cap = input.maxSizeBytes ?? DEFAULT_MAX_SIZE_BYTES;
  // Push the cap into the HTTP layer so we never buffer more than `cap`
  // bytes into memory — a malicious or buggy upstream sending a 5 GB
  // response would be aborted mid-stream rather than fully buffered
  // first and rejected after.
  const { contentType, buffer, truncated, sizeBytes } = await capsuleGetBinary(
    `/attachments/${input.id}`,
    cap,
  );
  if (truncated) {
    return { contentType, buffer: Buffer.alloc(0), truncated: true, sizeBytes };
  }
  return { contentType, buffer, sizeBytes };
}
```

- src/tools/attachments.ts:40-51 (schema): Zod schema for get_attachment input validation: id (number) and optional maxSizeBytes (number, max 25MB, default 5MB).

```ts
export const getAttachmentSchema = z.object({
  id: z.number().int().positive().describe("Attachment ID."),
  maxSizeBytes: z
    .number()
    .int()
    .positive()
    .max(HARD_MAX_SIZE_BYTES)
    .optional()
    .describe(
      `Refuse to return content over this size (default ${DEFAULT_MAX_SIZE_BYTES} bytes ≈ 5MB; max ${HARD_MAX_SIZE_BYTES} bytes ≈ 25MB). Files exceeding the cap return metadata only with a 'truncated: true' flag.`,
    ),
});
```

- src/server.ts:705-801 (registration): Registration of the get_attachment tool using a raw server.tool() call (not the registerTool helper), because its handler does content-type-aware response shaping for image vs text vs binary content.

```ts
server.tool(
  "get_attachment",
  "Download an attachment by id. Returns image content for image/* types (Claude can describe it natively); decoded text for text/* and application/json (small files); JSON metadata + base64 payload for other binary types (PDF, Office docs, etc.). Files exceeding maxSizeBytes (default 5MB) return metadata only with a `truncated: true` flag.",
  getAttachmentSchema.shape,
  async (input) => {
    const result = await getAttachment(input);

    // Truncated: return metadata only.
    if (result.truncated) {
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(
              {
                id: input.id,
                contentType: result.contentType,
                sizeBytes: result.sizeBytes,
                truncated: true,
                message: `File exceeds the size cap (${input.maxSizeBytes ?? "default"} bytes). Increase maxSizeBytes if you need the bytes; max is 25MB.`,
              },
              null,
              2,
            ),
          },
        ],
      };
    }

    // Strip any Content-Type parameters (e.g. "; charset=UTF-8") before
    // comparing — Capsule routinely returns "image/png; charset=UTF-8"
    // and "application/json; charset=UTF-8". Without this, the
    // application/json branch would miss the charset variant and the
    // file would fall through to the binary base64 branch, which Claude
    // can't read directly.
    const baseType = result.contentType.split(";")[0]!.trim().toLowerCase();

    // Image: return as MCP image content so Claude can see it.
    if (baseType.startsWith("image/")) {
      return {
        content: [
          {
            type: "image",
            data: result.buffer.toString("base64"),
            mimeType: result.contentType,
          },
        ],
      };
    }

    // Text-ish: decode as UTF-8 alongside metadata.
    const isText =
      baseType.startsWith("text/") ||
      baseType === "application/json" ||
      baseType === "application/xml";
    if (isText) {
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(
              {
                id: input.id,
                contentType: result.contentType,
                sizeBytes: result.sizeBytes,
              },
              null,
              2,
            ),
          },
          { type: "text", text: result.buffer.toString("utf8") },
        ],
      };
    }

    // Other binary (PDF, Office, archive): metadata + base64 payload
    // for downstream tools (Claude itself can't read PDF bytes
    // directly, but can pass the base64 to other tools).
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            {
              id: input.id,
              contentType: result.contentType,
              sizeBytes: result.sizeBytes,
              base64: result.buffer.toString("base64"),
            },
            null,
            2,
          ),
        },
      ],
    };
  },
);
```

- src/capsule/client.ts:465-526 (helper): capsuleGetBinary helper that performs the HTTP GET to download attachment bytes, with streaming size-cap enforcement (pre-buffer check via Content-Length, plus a mid-stream abort).

```ts
export async function capsuleGetBinary(
  path: string,
  maxBytes?: number,
): Promise<BinaryResult> {
  const token = getToken();
  const url = buildUrl(path);
  const { res, cleanup } = await doFetch(url, { headers: baseHeaders(token) });
  try {
    await throwForStatus(res);
    const contentType =
      res.headers.get("Content-Type") ?? "application/octet-stream";

    // Pre-buffer cap check. If Content-Length is advertised and exceeds
    // the cap, refuse to read the body at all.
    const declared = res.headers.get("Content-Length");
    const declaredBytes = declared ? Number(declared) : NaN;
    if (
      maxBytes !== undefined &&
      Number.isFinite(declaredBytes) &&
      declaredBytes > maxBytes
    ) {
      // Drain (cancel) the body so the connection can be released.
      if (res.body) await res.body.cancel().catch(() => {});
      return {
        contentType,
        buffer: Buffer.alloc(0),
        truncated: true,
        sizeBytes: declaredBytes,
      };
    }

    // Streaming cap check. Even when Content-Length is absent or honest,
    // abort the read once we've accumulated more than the cap. The
    // per-chunk read is wrapped in mapAbort so a mid-stream timeout
    // surfaces as the same clean 504 a fetch-stage timeout does.
    if (maxBytes !== undefined && res.body) {
      const reader = res.body.getReader();
      const chunks: Uint8Array[] = [];
      let total = 0;
      let truncated = false;
      while (true) {
        const { done, value } = await mapAbort(reader.read());
        if (done) break;
        total += value.byteLength;
        if (total > maxBytes) {
          truncated = true;
          await reader.cancel().catch(() => {});
          break;
        }
        chunks.push(value);
      }
      if (truncated) {
        return {
          contentType,
          buffer: Buffer.alloc(0),
          truncated: true,
          sizeBytes: total,
        };
      }
      const buffer = Buffer.concat(chunks.map((c) => Buffer.from(c)));
      return { contentType, buffer, sizeBytes: buffer.length };
    }

    const arrayBuffer = await mapAbort(res.arrayBuffer());
    const buffer = Buffer.from(arrayBuffer);
    return { contentType, buffer, sizeBytes: buffer.length };
  } finally {
    cleanup();
  }
}
```

- src/tools/attachments.ts:53-58 (helper): AttachmentResult interface returned by the getAttachment handler.

```ts
export interface AttachmentResult {
  contentType: string;
  buffer: Buffer;
  truncated?: boolean;
  sizeBytes: number;
}
```
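The mid-stream cap enforced by capsuleGetBinary reduces to a small standalone pattern. A sketch using only the standard ReadableStream API — `readCapped` is an illustrative name, not a function in this repo, and it omits the repo's mapAbort timeout wrapping:

```typescript
// Illustrative sketch of the streaming size-cap pattern: accumulate chunks,
// cancel the read as soon as the running total exceeds the cap, so an
// oversized body is never fully buffered in memory.
async function readCapped(
  body: ReadableStream<Uint8Array>,
  maxBytes: number,
): Promise<{ truncated: boolean; sizeBytes: number; buffer: Buffer }> {
  const reader = body.getReader();
  const chunks: Uint8Array[] = [];
  let total = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.byteLength;
    if (total > maxBytes) {
      // Abort mid-stream rather than buffering the rest.
      await reader.cancel().catch(() => {});
      return { truncated: true, sizeBytes: total, buffer: Buffer.alloc(0) };
    }
    chunks.push(value);
  }
  return {
    truncated: false,
    sizeBytes: total,
    buffer: Buffer.concat(chunks.map((c) => Buffer.from(c))),
  };
}
```

Note that, like the real helper, the sketch reports `sizeBytes` as the count seen so far at the moment of truncation, not the full (unknown) body size.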