soil-dev / capsulemcp

get_attachment

Retrieve an attachment by its ID. Returns image content, decoded text, or metadata and base64 for binary files; large files return metadata only with a truncated flag.

Instructions

Download an attachment by id. Returns image content for image/* types (Claude can describe it natively); decoded text for text/* and application/json (small files); JSON metadata + base64 payload for other binary types (PDF, Office docs, etc.). Files exceeding maxSizeBytes (default 5MB) return metadata only with a truncated: true flag.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| id | Yes | Attachment ID. | (none) |
| maxSizeBytes | No | Refuse to return content over this size (max 26214400 bytes ≈ 25MB). Files exceeding the cap return metadata only with a `truncated: true` flag. | 5242880 bytes (≈ 5MB) |
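To illustrate how the two size constants interact, here is a minimal sketch. The constant names mirror the schema text above, but the clamping behavior is an assumption for illustration: the real schema simply rejects caps above the hard max via zod's `.max()`.

```typescript
// Sketch only: constants taken from the schema text above; the clamping is
// illustrative (the real server rejects oversized caps during validation).
const DEFAULT_MAX_SIZE_BYTES = 5_242_880;  // 5 MB default cap
const HARD_MAX_SIZE_BYTES = 26_214_400;    // 25 MB hard ceiling

function willTruncate(sizeBytes: number, maxSizeBytes?: number): boolean {
  // Clamp the requested cap to the hard ceiling, then compare.
  const cap = Math.min(maxSizeBytes ?? DEFAULT_MAX_SIZE_BYTES, HARD_MAX_SIZE_BYTES);
  return sizeBytes > cap;
}
```

So a 6 MB file is refused under the default cap but returned in full if the caller raises `maxSizeBytes` to 10 MB.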

Implementation Reference

  • Core handler function for get_attachment. Calls capsuleGetBinary to fetch the attachment from Capsule API, returns raw bytes + content type + size, with optional truncated flag if the file exceeds the size cap.
    export async function getAttachment(
      input: z.infer<typeof getAttachmentSchema>,
    ): Promise<AttachmentResult> {
      const cap = input.maxSizeBytes ?? DEFAULT_MAX_SIZE_BYTES;
      // Push the cap into the HTTP layer so we never buffer more than `cap`
      // bytes into memory — a malicious or buggy upstream sending a 5 GB
      // response would be aborted mid-stream rather than fully buffered
      // first and rejected after.
      const { contentType, buffer, truncated, sizeBytes } = await capsuleGetBinary(
        `/attachments/${input.id}`,
        cap,
      );
      if (truncated) {
        return { contentType, buffer: Buffer.alloc(0), truncated: true, sizeBytes };
      }
      return { contentType, buffer, sizeBytes };
    }
  • Zod schema for get_attachment input validation: id (number) and optional maxSizeBytes (number, max 25MB, default 5MB).
    export const getAttachmentSchema = z.object({
      id: z.number().int().positive().describe("Attachment ID."),
      maxSizeBytes: z
        .number()
        .int()
        .positive()
        .max(HARD_MAX_SIZE_BYTES)
        .optional()
        .describe(
          `Refuse to return content over this size (default ${DEFAULT_MAX_SIZE_BYTES} bytes ≈ 5MB; max ${HARD_MAX_SIZE_BYTES} bytes ≈ 25MB). Files exceeding the cap return metadata only with a 'truncated: true' flag.`,
        ),
    });
  • src/server.ts:705-801 (registration)
    Registration of the get_attachment tool using raw server.tool() call (not registerTool helper) because its handler does content-type-aware response shaping for image vs text vs binary content.
    server.tool(
      "get_attachment",
      "Download an attachment by id. Returns image content for image/* types (Claude can describe it natively); decoded text for text/* and application/json (small files); JSON metadata + base64 payload for other binary types (PDF, Office docs, etc.). Files exceeding maxSizeBytes (default 5MB) return metadata only with a `truncated: true` flag.",
      getAttachmentSchema.shape,
      async (input) => {
        const result = await getAttachment(input);
    
        // Truncated: return metadata only.
        if (result.truncated) {
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify(
                  {
                    id: input.id,
                    contentType: result.contentType,
                    sizeBytes: result.sizeBytes,
                    truncated: true,
                    message: `File exceeds the size cap (${input.maxSizeBytes ?? "default"} bytes). Increase maxSizeBytes if you need the bytes; max is 25MB.`,
                  },
                  null,
                  2,
                ),
              },
            ],
          };
        }
    
        // Strip any Content-Type parameters (e.g. "; charset=UTF-8") before
        // comparing — Capsule routinely returns "image/png; charset=UTF-8"
        // and "application/json; charset=UTF-8". Without this, the
        // application/json branch would miss the charset variant and the
        // file would fall through to the binary base64 branch, which Claude
        // can't read directly.
        const baseType = result.contentType.split(";")[0]!.trim().toLowerCase();
    
        // Image: return as MCP image content so Claude can see it.
        if (baseType.startsWith("image/")) {
          return {
            content: [
              {
                type: "image",
                data: result.buffer.toString("base64"),
                mimeType: result.contentType,
              },
            ],
          };
        }
    
        // Text-ish: decode as UTF-8 alongside metadata.
        const isText =
          baseType.startsWith("text/") ||
          baseType === "application/json" ||
          baseType === "application/xml";
        if (isText) {
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify(
                  {
                    id: input.id,
                    contentType: result.contentType,
                    sizeBytes: result.sizeBytes,
                  },
                  null,
                  2,
                ),
              },
              { type: "text", text: result.buffer.toString("utf8") },
            ],
          };
        }
    
        // Other binary (PDF, Office, archive): metadata + base64 payload
        // for downstream tools (Claude itself can't read PDF bytes
        // directly, but can pass the base64 to other tools).
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify(
                {
                  id: input.id,
                  contentType: result.contentType,
                  sizeBytes: result.sizeBytes,
                  base64: result.buffer.toString("base64"),
                },
                null,
                2,
              ),
            },
          ],
        };
      },
    );
  • capsuleGetBinary helper function that performs the HTTP GET to download attachment bytes, with streaming size cap enforcement (pre-buffer via Content-Length and mid-stream abort).
    export async function capsuleGetBinary(path: string, maxBytes?: number): Promise<BinaryResult> {
      const token = getToken();
      const url = buildUrl(path);
      const { res, cleanup } = await doFetch(url, { headers: baseHeaders(token) });
      try {
        await throwForStatus(res);
        const contentType = res.headers.get("Content-Type") ?? "application/octet-stream";
    
        // Pre-buffer cap check. If Content-Length is advertised and exceeds
        // the cap, refuse to read the body at all.
        const declared = res.headers.get("Content-Length");
        const declaredBytes = declared ? Number(declared) : NaN;
        if (maxBytes !== undefined && Number.isFinite(declaredBytes) && declaredBytes > maxBytes) {
          // Drain (cancel) the body so the connection can be released.
          if (res.body) await res.body.cancel().catch(() => {});
          return {
            contentType,
            buffer: Buffer.alloc(0),
            truncated: true,
            sizeBytes: declaredBytes,
          };
        }
    
        // Streaming cap check. Even when Content-Length is absent or honest,
        // abort the read once we've accumulated more than the cap. The
        // per-chunk read is wrapped in mapAbort so a mid-stream timeout
        // surfaces as the same clean 504 a fetch-stage timeout does.
        if (maxBytes !== undefined && res.body) {
          const reader = res.body.getReader();
          const chunks: Uint8Array[] = [];
          let total = 0;
          let truncated = false;
          while (true) {
            const { done, value } = await mapAbort(reader.read());
            if (done) break;
            total += value.byteLength;
            if (total > maxBytes) {
              truncated = true;
              await reader.cancel().catch(() => {});
              break;
            }
            chunks.push(value);
          }
          if (truncated) {
            return {
              contentType,
              buffer: Buffer.alloc(0),
              truncated: true,
              sizeBytes: total,
            };
          }
          const buffer = Buffer.concat(chunks.map((c) => Buffer.from(c)));
          return { contentType, buffer, sizeBytes: buffer.length };
        }
    
        const arrayBuffer = await mapAbort(res.arrayBuffer());
        const buffer = Buffer.from(arrayBuffer);
        return { contentType, buffer, sizeBytes: buffer.length };
      } finally {
        cleanup();
      }
    }
  • AttachmentResult interface returned by getAttachment handler.
    export interface AttachmentResult {
      contentType: string;
      buffer: Buffer;
      truncated?: boolean;
      sizeBytes: number;
    }
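As a quick illustration of how a caller might branch on this result, the hypothetical helper below reuses the base-type normalization shown in the registration code (the `classifyResult` name and its string labels are assumptions, not part of the server's API):

```typescript
interface AttachmentResult {
  contentType: string;
  buffer: Buffer;
  truncated?: boolean;
  sizeBytes: number;
}

// Hypothetical helper: classify a result the same way the tool handler does.
function classifyResult(r: AttachmentResult): string {
  if (r.truncated) return "truncated";
  // Strip Content-Type parameters such as "; charset=UTF-8" before comparing.
  const baseType = r.contentType.split(";")[0]!.trim().toLowerCase();
  if (baseType.startsWith("image/")) return "image";
  if (
    baseType.startsWith("text/") ||
    baseType === "application/json" ||
    baseType === "application/xml"
  ) {
    return "text";
  }
  return "binary";
}
```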
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavior: returns image content for image/*, decoded text for text/* and application/json, and JSON metadata with base64 for other binary types. Also explains truncation behavior with maxSizeBytes and default caps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with main purpose, no wasted words. Efficiently conveys complex behavior (multiple response types, truncation) in a compact structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and moderate complexity (multiple content type handlers, size limit behavior), the description is thoroughly complete. It covers all major aspects an agent needs to know to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage (both parameters described). The description adds value by explaining how maxSizeBytes affects response types (metadata-only when truncated) and default/max values, though schema already includes some of this. Enriches understanding beyond bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Download an attachment by id' and distinguishes from sibling upload_attachment. It specifies the resource (attachment) and action (download), and provides distinct handling for different content types, making the tool's purpose very clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for downloading attachments but does not explicitly state when to use this tool versus alternatives (e.g., get_entry for text content). No exclusions or when-not-to-use guidance provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
