File upload operations. Chunked uploads via POST /blob sidecar (create-session, POST raw binary to /blob, upload chunks with blob_id, finalize), streaming uploads (single-call `stream-upload` that creates a session and streams in one shot — auto-finalizes), web URL imports, batch uploads (many small files in one round-trip via `batch`), and upload configuration. Side effects: finalize/stream/stream-upload/batch create new files that consume storage credits.
UPLOAD STRATEGY (default to single-file paths): 1) For files with a URL: use `web-import` (single call). 2) For files with unknown size (generated/piped content): use `stream-upload` — one call creates the session and streams the bytes (auto-finalizes). 3) For files with known size: create-session → POST to /blob → chunk with blob_id → finalize. The POST /blob sidecar is the canonical large-file path — bypasses MCP transport limits, no base64 overhead, up to 100 MB. Specialized: `batch` is for the specific case of uploading multiple small files (≤4 MB each, ≤200 per call, ≤100 MB total) in one round-trip — useful for AI-output bundles, exported CSVs, receipts. Don't reach for `batch` for one-off single uploads or streamed content; use the single-file paths above.
BINARY CONTENT IN-BAND: `content` is **text-only** — it is stored verbatim as UTF-8 bytes, so passing a base64 string there will write a base64-encoded text file (NOT the decoded binary). For sandboxed agents that can produce base64 in JSON but cannot reach POST /blob, use `content_base64` on `chunk`/`stream`/`stream-upload`/`batch` entries — the server decodes it before writing. Practical cap is bounded by the MCP transport message size (a few MB); for larger binary files, POST /blob + `blob_id` is still the right path.
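The pitfall is easy to demonstrate locally: a base64 string placed in `content` is stored as the text of that string, while `content_base64` is decoded server-side back to the original bytes. A minimal self-contained illustration (the fake PNG payload is made up):

```python
import base64

png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 8   # fake binary payload
b64 = base64.b64encode(png_bytes).decode("ascii")

# Passing b64 via `content` stores the TEXT of the base64 string verbatim:
stored_as_content = b64.encode("utf-8")
assert stored_as_content != png_bytes            # still encoded — wrong file

# Passing it via `content_base64` means the server decodes before writing:
stored_as_content_base64 = base64.b64decode(b64)
assert stored_as_content_base64 == png_bytes     # the real binary — correct
```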
STREAM MODE: When you don't know the file size upfront, prefer the consolidated `stream-upload` action — it accepts profile/parent/filename plus one of content|content_base64|blob_id and handles create-session + stream + auto-finalize internally. The lower-level `create-session` (with stream=true) + `stream` pair is still supported for cases where you need the session ID between calls.
MAX_SIZE GUIDANCE: `max_size` is a ceiling on the stream body — exceeding it aborts the upload mid-transfer. **Always overestimate, never undershoot.** There is no penalty for setting it higher than you need. Safest default: omit `max_size` entirely and the server uses your plan's file-size limit. Note: streaming uploads via MCP are also bounded by the `POST /blob` sidecar (100 MB cap per blob) — for larger files, use the chunked flow (`create-session` → `chunk` → `finalize`) instead, and call `upload` action `limits` first to confirm your plan's max file size.
POST /blob SIDECAR: The MCP server exposes a `/blob` HTTP endpoint that accepts raw data (no base64, no MCP transport limit, up to 100 MB). The create-session response includes blob_upload with the endpoint URL, your session ID, and a ready-to-use curl command. Blobs expire after 5 minutes and are single-use.
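A raw-body POST to the sidecar can be sketched as follows. The URL is a placeholder — the real endpoint, session ID, and any required headers come from the create-session response (`blob_upload`) or the `blob-info` action, which also returns a ready-to-use curl command.

```python
import urllib.request

# Placeholder endpoint; substitute the URL and headers from blob_upload/blob-info.
BLOB_URL = "https://example.invalid/blob"

raw = b"\x00" * 1024                              # raw bytes — no base64
req = urllib.request.Request(
    BLOB_URL,
    data=raw,                                     # body is the raw binary
    method="POST",
    headers={"Content-Type": "application/octet-stream"},
)
# urllib.request.urlopen(req) would send it. Blobs expire after 5 minutes and
# are single-use, so POST immediately before the chunk/stream call that uses
# the returned blob_id.
```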
OVERWRITE A SPECIFIC NODE: Pass `target_node_id` on create-session or stream-upload to deterministically overwrite a specific node (preserves node_id; new version created). This is the reliable way to update an existing file — don't delete+reupload. When target_node_id is set, parent_node_id is ignored and filename is optional — if omitted, the existing node's current filename is auto-resolved and reused (pass filename only when renaming).
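The overwrite semantics can be shown with a stubbed call: when `target_node_id` is set, neither `parent_node_id` nor `filename` needs to be sent. Here `call` is a local stand-in for however your MCP client invokes the tool — it is not a real client API.

```python
# `call` is a hypothetical client shim that just records the parameters sent.
sent = {}

def call(action: str, **params):
    sent.update(params)
    return {"ok": True}

call("stream-upload", profile_type="user", profile_id="p1",
     target_node_id="node-42",                   # overwrite this node in place
     content="updated report body")              # filename omitted: reused

assert "parent_node_id" not in sent              # ignored when overwriting
assert "filename" not in sent                    # auto-resolved from the node
```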
BATCH MODE: `batch` uploads up to 200 small files to one target folder in a single round-trip. Hard limits: ≤200 files per call, ≤4 MB per file, ≤100 MB total resolved bytes. Any file exceeding 4 MB causes the whole call to be rejected — route those files through the chunked flow (create-session → chunk → finalize). `batch` requires authentication (anonymous callers are rejected with HTTP 401, code 10011) — for unauthenticated public-receive/public-exchange share uploads, use the single-file `create-session` path instead. Input: `files[]` array where each entry has `filename` (required), one of `blob_id`/`content`/`content_base64` (required), and optional `relative_path` (trailing slash required; auto-normalized; no leading slash, no `.`/`..` segments) to place the file in a sub-folder. The batch endpoint is rate-limited in a bucket independent of single-file `/upload/` — on HTTP 429 the tool surfaces the `x-ve-limit-expires` UTC datetime in the error message along with the remaining `x-ve-limit-avail`/`x-ve-limit-max` quota. SHA-256 is computed client-side by default (set `include_hash: false` to skip). **Partial success is normal**: HTTP 200 with `count_errored > 0` is a successful response — inspect `results[]` per entry and retry only the errored ones. When every entry errors, `all_failed: true` is set on the response. **`node_id` is nullable on success**: workspaces with async storage return `status: "ok"` with `node_id: null` — the storage node is assigned later; this is SUCCESS, not failure. If the final node_id is required, poll `storage` action `list` with the target folder afterward.
Actions & required params:
- create-session: profile_type, profile_id, parent_node_id, filename, filesize (+ optional: chunk_size, stream, max_size, target_node_id). When stream=true, filesize is optional. When target_node_id is provided, parent_node_id is ignored and filename is optional (auto-resolved from the existing node).
- stream-upload: profile_type, profile_id, parent_node_id, filename, content | content_base64 | blob_id (exactly one) (+ optional: max_size, target_node_id, hash, hash_algo). Creates a stream session and uploads in one call. Auto-finalizes. When target_node_id is provided, parent_node_id is ignored and filename is optional (auto-resolved from the existing node).
- batch: profile_type, profile_id, files[] (1..200 items, each with filename + exactly one of blob_id|content|content_base64, optional relative_path/hash/hash_algo) (+ optional: folder_id, creator, include_hash).
- chunk: upload_id, chunk_number, content | content_base64 | blob_id (exactly one). Not allowed on stream sessions.
- stream: upload_id, content | content_base64 | blob_id (exactly one) (+ optional: hash, hash_algo). Only for stream sessions. Auto-finalizes. Prefer `stream-upload` unless you need the session ID between calls.
- finalize: upload_id. Not needed for stream sessions.
- status: upload_id (+ optional: wait)
- cancel: upload_id [DESTRUCTIVE]
- list-sessions: (none)
- cancel-all: (none) [DESTRUCTIVE]
- chunk-status: upload_id (+ optional: chunk_id)
- chunk-delete: upload_id, chunk_number [DESTRUCTIVE]
- web-import: profile_type, profile_id, parent_node_id, url (+ optional: filename)
- web-list: (none) (+ optional: limit, offset, status)
- web-cancel: upload_id [DESTRUCTIVE]
- web-status: upload_id
- limits: (none) (+ optional: action_context, instance_id, file_id, org)
- extensions: (none) (+ optional: plan)
- blob-info: (none) — returns POST /blob endpoint URL, session ID, headers, curl example, and workflow for shell-based uploads
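The chunked flow's call order (create-session → chunk ×N → finalize) can be sketched with a stubbed client. `call` is a local stand-in for your MCP client, and chunk numbering is assumed 1-based here — the action names and parameters follow the list above, but the shim and the numbering convention are assumptions.

```python
# Call-order sketch for the chunked flow. `call` is a hypothetical client shim.
calls = []

def call(action: str, **params):
    calls.append(action)
    return {"upload_id": "u1"} if action == "create-session" else {"ok": True}

session = call("create-session", profile_type="user", profile_id="p1",
               parent_node_id="folder1", filename="big.bin",
               filesize=3 * 2**20, chunk_size=2**20)
upload_id = session["upload_id"]

for n in range(1, 4):                            # numbering assumed 1-based
    # each chunk carries exactly one of content | content_base64 | blob_id;
    # here the blobs would come from prior raw POSTs to the /blob sidecar
    call("chunk", upload_id=upload_id, chunk_number=n, blob_id=f"blob-{n}")

call("finalize", upload_id=upload_id)            # creates the file

assert calls == ["create-session", "chunk", "chunk", "chunk", "finalize"]
```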