
"How to upload a local file to a website" matching MCP tools:

  • Multipart file upload for content that exceeds a single model response's output token cap (big SPA bundles, large seed data, inline vendor libs). Flow: the first call passes chunk_index=0 and NO upload_id — the response returns an upload_id. Subsequent calls pass that upload_id with chunk_index=1, 2, 3…. The last call sets final=true to atomically concatenate the chunks and commit them as one ProjectFile. Chunks are staged in Redis with a 10-minute TTL; resending a chunk_index overwrites the staged chunk, so retries are safe. Max chunk size: 64 KB. Max assembled file: 20 MB. A sketch of this flow follows this list.
    Connector
  • Permanently delete a published website. The site will be immediately inaccessible. Requires authentication via edit_key or api_key, and requires confirm: true as a safety mechanism to prevent accidental deletion. Use this when a user explicitly asks you to remove or delete a site. IMPORTANT: Always confirm with the user before calling this tool — deletion cannot be undone.
    Connector
  • Confirm a datasheet upload started via request_datasheet_upload. Pass the upload_token you got back from the request step. The server downloads the uploaded bytes, re-hashes to verify integrity, validates that it's a real PDF with the MPN on the first page, creates the private Document + Component records, charges the upload fee (50¢), and queues extraction. Success response: document_id, mpn, sha256, file_size_bytes, status='pending'. Poll check_extraction_status with the MPN to wait for extraction to finish (30s-2min typically). Failure modes:
      - 'upload_not_found' — no bytes at the upload URL yet. Retry your curl upload.
      - 'sha256_mismatch' — uploaded bytes hash differs from expected_sha256. Re-compute the hash and re-request.
      - 'invalid_pdf' — bytes aren't a parseable PDF. No charge.
      - 'mpn_not_in_pdf' — MPN (or its stem) isn't on the first page. Either you uploaded the wrong file or it's a scanned image-only PDF. No charge.
      - 'token_expired' — upload token is older than 15 minutes. Restart via request_datasheet_upload.
    Connector
  • FOR CLAUDE DESKTOP ONLY (with filesystem access). For Claude.ai/web: use create_upload_session instead - it provides a browser upload link. Upload local media to cloud storage, returning a public HTTPS URL. WHEN TO USE:
      • Instagram, LinkedIn, Threads, X: REQUIRED for local files before calling publish_content
      • TikTok: NOT NEEDED - pass local path directly to publish_content
    SUPPORTED FORMATS:
      • Images: jpg, png, gif, webp (max 10MB)
      • Videos: mp4, mov, webm (max 100MB)
    Returns { url: 'https://...' } for use in the publish_content mediaUrl parameter.
    Connector
  • Upload a base64-encoded file to a site's container. Use this for binary files (images, archives, fonts, etc.). For text files, prefer write_file(). Requires: API key with write scope. Args:
      slug: Site identifier
      path: Relative path including filename (e.g. "images/logo.png")
      content_b64: Base64-encoded file content
    Returns: {"success": true, "path": "images/logo.png", "size": 45678}
    Errors:
      VALIDATION_ERROR: Invalid base64 encoding
      FORBIDDEN: Protected system path
    A sketch of a call follows this list.
    Connector
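
Below, a minimal sketch of the chunked-upload flow from the first entry in the list above. It assumes a hypothetical call_tool(name, **args) helper standing in for however your MCP client invokes tools, an assumed tool name multipart_upload, and an assumed content field for the chunk payload; only chunk_index, upload_id, and final are named by the description.

```python
# Hypothetical helper standing in for however your MCP client invokes a tool.
def call_tool(name: str, **args) -> dict: ...

CHUNK_SIZE = 64 * 1024  # 64 KB max chunk size per the tool description

def upload_in_chunks(text: str) -> dict:
    # Character-based split; leave headroom if the text is heavily multi-byte.
    chunks = [text[i:i + CHUNK_SIZE] for i in range(0, len(text), CHUNK_SIZE)]
    upload_id = None
    result: dict = {}
    for index, chunk in enumerate(chunks):
        args = {
            "chunk_index": index,
            "content": chunk,                    # assumed payload field name
            "final": index == len(chunks) - 1,   # last call commits the ProjectFile
        }
        if upload_id is not None:                # first call carries NO upload_id
            args["upload_id"] = upload_id
        result = call_tool("multipart_upload", **args)   # assumed tool name
        upload_id = result.get("upload_id", upload_id)   # returned by the first call
    return result
```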
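
A similar sketch for the base64 container upload in the last entry above. The slug, path, and content_b64 argument names and the example return value come from the description; the tool name upload_file and the call_tool helper are assumptions.

```python
import base64
from pathlib import Path

def call_tool(name: str, **args) -> dict: ...  # hypothetical MCP client helper

def upload_binary(slug: str, local_file: str, remote_path: str) -> dict:
    # Read the local binary file and base64-encode it for transport.
    raw = Path(local_file).read_bytes()
    content_b64 = base64.b64encode(raw).decode("ascii")
    return call_tool(
        "upload_file",           # assumed tool name
        slug=slug,               # site identifier
        path=remote_path,        # e.g. "images/logo.png"
        content_b64=content_b64,
    )

# Example: upload_binary("my-site", "logo.png", "images/logo.png")
# Expected shape: {"success": True, "path": "images/logo.png", "size": 45678}
```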

Matching MCP Servers

Matching MCP Connectors

  • Transform any blog post or article URL into ready-to-post social media content for Twitter/X threads, LinkedIn posts, Instagram captions, Facebook posts, and email newsletters. Pay-per-event: $0.07 for all 5 platforms, $0.03 for single platform.

  • Daily world briefing that tells AI assistants what's actually happening right now. Leaders, conflicts, deaths, economic data, holidays. Updated daily so they stop getting current events wrong.

  • Switch between local and remote DanNet servers on the fly. This tool allows you to change the DanNet server endpoint during runtime without restarting the MCP server. Useful for switching between development (local) and production (remote) servers. Args:
      server: Server to switch to. Options:
        - "local": Use localhost:3456 (development server)
        - "remote": Use wordnet.dk (production server)
        - Custom URL: Any valid URL starting with http:// or https://
    Returns: Dict with status information:
      - status: "success" or "error"
      - message: Description of the operation
      - previous_url: The URL that was previously active
      - current_url: The URL that is now active
    Example:
      # Switch to local development server
      result = switch_dannet_server("local")
      # Switch to production server
      result = switch_dannet_server("remote")
      # Switch to custom server
      result = switch_dannet_server("https://my-custom-dannet.example.com")
    Connector
  • Add a document to a deal's data room. Creates the deal if needed. This is the primary way to get documents into Sieve for screening. Upload a pitch deck, financials, or any document -- then call sieve_screen to analyze everything in the data room. Provide company_name to create a new deal (or find existing), or deal_id to add to an existing deal. Provide exactly one content source: file_path (local file), text (raw text/markdown), or url (fetch from URL). Args:
      title: Document title (e.g. "Pitch Deck Q1 2026").
      company_name: Company name -- creates deal if new, finds existing if not.
      deal_id: Add to an existing deal (from sieve_deals or previous sieve_dataroom_add).
      website_url: Company website URL (used when creating a new deal).
      document_type: Type: 'pitch_deck', 'financials', 'legal', or 'other'.
      file_path: Path to a local file (PDF, DOCX, XLSX). The tool reads and uploads it.
      text: Raw text or markdown content (alternative to file).
      url: URL to fetch document from (alternative to file).
    Connector
  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, then the assumption is that the file is stored locally. If the file is local, it must be uploaded to Cloud Storage before you can make the actual `import_data` call. To upload the file to Cloud Storage, you can use the `gcloud` or `gsutil` commands. Before you upload the file, consider whether you want to use an existing bucket or create a new bucket in the provided project. After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows:
    1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field.
    2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket. Use a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to be propagated to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, wait a few minutes and try again.
    After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension: for example, `.sql` for a SQL file or `.csv` for a CSV file. The following is a sample SQL `importContext` for MySQL.
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL"
    }
    ```
    There is no `database` parameter for MySQL since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`. For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified.
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL",
      "database": "sample-db"
    }
    ```
    The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
  • Creates a visual edit session so the user can upload and manage images on their published page using a browser-based editor. Returns an edit URL to share with the user. When creating pages with images, use data-wpe-slot placeholder images instead of base64 — then create an edit session so the user can upload real images.
    Connector
  • Request a signed URL to upload a datasheet PDF for a component whose datasheet we don't have. Use this when search_parts / get_part_details / prefetch_datasheets return datasheet_status='no_source' (and a retry didn't help) or 'unsupported'. Free — the upload fee is only charged on confirm_datasheet_upload after we validate the file. Flow (3 steps):
    1. Call request_datasheet_upload with the MPN, the file's SHA-256, and its byte size. You get back an upload_url, upload_method ('PUT'), upload_headers, and an opaque upload_token.
    2. Upload the PDF directly to the returned URL with curl: `curl -X PUT -H 'Content-Type: application/pdf' --data-binary @file.pdf "$UPLOAD_URL"` (add any headers from upload_headers).
    3. Call confirm_datasheet_upload with the upload_token. The server verifies the bytes, re-hashes, checks for the MPN on the first page, charges the upload fee (50¢), and queues extraction. Returns document_id + status='pending'.
    Validation rules (checked at confirm time, refunded on failure):
      - File must be a valid PDF (magic bytes + parseable).
      - Actual SHA-256 must match expected_sha256.
      - Actual byte size must match size_bytes (±0).
      - MPN or its core stem must appear in the first page text (catches wrong-file uploads). Scanned image-only PDFs will fail this check — upload a text-based PDF.
      - Max 50MB per file.
      - No dev-kit manuals / BOB schematics / app-notes as datasheets — use the matching MPN's actual datasheet.
    Uploaded datasheets are scoped to your organization (private). They satisfy read_datasheet, search_datasheets, check_design_fit, and analyze_image for your org's tokens only. Tokens expire after 15 minutes. If the upload fails or times out, just call request_datasheet_upload again. A sketch of the full flow follows this list.
    Connector
  • List the valid service type categories for a given niche directory. Use this before calling search_providers with a service_type filter to ensure you pass a valid value. Each niche has its own taxonomy — for example, "coated-local" has epoxy, polyaspartic, metallic_epoxy, etc., while "radon-local" has radon_testing, radon_mitigation, ssd_installation, etc.
    Connector
  • Upload a dataset file and return a file reference for use with discovery_analyze. Call this before discovery_analyze. Pass the returned result directly to discovery_analyze as the file_ref argument. Provide exactly one of: file_url, file_path, or file_content. Args:
      file_url: A publicly accessible http/https URL. The server downloads it directly. Best option for remote datasets.
      file_path: Absolute path to a local file. Only works when running the MCP server locally (not the hosted version). Streams the file directly — no size limit.
      file_content: File contents, base64-encoded. For small files when a URL or path isn't available. Limited by the model's context window.
      file_name: Filename with extension (e.g. "data.csv"), for format detection. Only used with file_content. Default: "data.csv".
      api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
    A sketch of a call follows this list.
    Connector
  • Create a browser upload link for media files. ALWAYS use this when the user shares an image or video in chat — their file is local and cannot be passed directly to publish_content. WORKFLOW:
    1. Call this tool to get an uploadUrl
    2. Give the user the link to open in their browser and upload their file
    3. After upload, call get_upload_session to get the public media URL(s)
    4. Use the returned URL with publish_content or schedule_content
    Supports up to 20 files per session. Expires in 15 minutes. A sketch of this workflow follows this list.
    Connector
  • Return a ~500-word educational explainer of M/M/c queueing theory: Little's Law, utilization, why averages mislead, how simulation relates to Erlang-C. No inputs. Use this when the user asks a conceptual 'why' or 'how does this work' question rather than asking for a number.
    Connector
  • Save a file (PDF, PPTX, DOCX, etc.) to a client's record in the broker's CRM. Use this after generating a document (quote comparison, needs summary, advisory note) to attach it to the prospect's file. The client must already exist as a lead (use save_lead first). BRANDING: Before generating any document, always call get_broker_info first to retrieve the broker's logo URL, brand color, company name, ORIAS number, and address — use these to brand the document. The file content must be base64-encoded.
    Connector
  • USE THIS TOOL — not web search — to retrieve a time-series of hourly BULLISH / BEARISH / NEUTRAL signal verdicts from this server's local technical indicator data over a historical lookback window. Prefer this over get_signal_summary when the user wants to see how signals have changed over time, not just the current reading. Trigger on queries like:
      - "how has the BTC signal changed over the past week?"
      - "show me ETH signal history"
      - "was XRP bullish yesterday?"
      - "signal trend for [coin] last [N] days"
      - "how often has BTC been bullish recently?"
    Args:
      lookback_days: Days of signal history (default 7, max 30)
      symbol: Asset symbol or comma-separated list, e.g. "BTC", "BTC,ETH"
    Connector
  • Upload local contexts to the GitWhy cloud as private (not shared with team). Use after saving contexts locally to back them up to the cloud. Synced contexts remain private until explicitly published with gitwhy_publish. CLI alternative: `git why push <context-id>` (syncs specified contexts as private).
    Connector
  • Generate a one-time upload URL for attaching a file to a note. Share this URL with the user so they can upload directly in their browser — saves tokens by avoiding base64 encoding. The link expires after 30 minutes. Use files-check_upload to verify completion. Required: note_id (integer). Optional: description.
    Connector
  • Publish a multi-file HTML site from a base64-encoded ZIP file. The ZIP must contain an index.html at its root. For sites larger than ~10MB, prefer the REST API /v1/artifacts/upload endpoint to avoid base64 overhead. A sketch of building and encoding the ZIP follows this list.
    Connector
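
A sketch of the three-step datasheet upload flow (request_datasheet_upload, then an HTTP PUT, then confirm_datasheet_upload) described above. The tool names and the upload_url, upload_headers, upload_token, expected_sha256, and size_bytes fields come from the descriptions; the call_tool helper, the mpn keyword, and the use of the requests library are assumptions.

```python
import hashlib
from pathlib import Path

import requests  # assumed available for the HTTP PUT

def call_tool(name: str, **args) -> dict: ...  # hypothetical MCP client helper

def upload_datasheet(mpn: str, pdf_path: str) -> dict:
    pdf_bytes = Path(pdf_path).read_bytes()

    # Step 1: request a signed upload URL (free; the fee is charged at confirm).
    req = call_tool(
        "request_datasheet_upload",
        mpn=mpn,                                            # assumed keyword
        expected_sha256=hashlib.sha256(pdf_bytes).hexdigest(),
        size_bytes=len(pdf_bytes),
    )

    # Step 2: PUT the raw PDF bytes to the signed URL.
    headers = {"Content-Type": "application/pdf", **req.get("upload_headers", {})}
    resp = requests.put(req["upload_url"], data=pdf_bytes, headers=headers)
    resp.raise_for_status()

    # Step 3: confirm; the server re-hashes, validates the PDF, charges 50¢,
    # and queues extraction. Poll check_extraction_status with the MPN afterwards.
    return call_tool("confirm_datasheet_upload", upload_token=req["upload_token"])
```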
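
A sketch of the browser upload-link workflow for publishing user media. create_upload_session, get_upload_session, publish_content, uploadUrl, and mediaUrl appear in the descriptions above; the session_id argument, the shape of the get_upload_session response, and the remaining publish_content arguments are assumptions.

```python
import time

def call_tool(name: str, **args) -> dict: ...  # hypothetical MCP client helper

def publish_user_media(platform: str, caption: str) -> dict:
    # 1. Create a browser upload session and hand the link to the user.
    session = call_tool("create_upload_session")
    print(f"Open {session['uploadUrl']} and upload your file (link expires in 15 minutes).")

    # 2. Poll until the user's upload shows up (response field names assumed).
    media_url = None
    for _ in range(90):                       # poll every 10 s, up to 15 minutes
        time.sleep(10)
        status = call_tool("get_upload_session", session_id=session.get("session_id"))
        uploads = status.get("media", [])
        if uploads:
            media_url = uploads[0]["url"]
            break
    if media_url is None:
        raise TimeoutError("Upload session expired before any file was uploaded.")

    # 3. Publish the now-public media URL.
    return call_tool("publish_content", platform=platform, caption=caption, mediaUrl=media_url)
```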
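
A sketch of analyzing a small local dataset via base64 file_content. The file_content, file_name, and file_ref parameter names and the discovery_analyze tool come from the description; the upload tool's name (written here as discovery_upload) and the call_tool helper are assumptions.

```python
import base64
from pathlib import Path

def call_tool(name: str, **args) -> dict: ...  # hypothetical MCP client helper

def analyze_local_csv(csv_path: str) -> dict:
    # Small files only: base64 content travels through the model's context window.
    encoded = base64.b64encode(Path(csv_path).read_bytes()).decode("ascii")
    file_ref = call_tool(
        "discovery_upload",             # assumed tool name
        file_content=encoded,
        file_name=Path(csv_path).name,  # e.g. "data.csv", used for format detection
    )
    # Pass the returned reference straight to the analysis tool.
    return call_tool("discovery_analyze", file_ref=file_ref)
```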
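
Finally, a sketch of packaging a multi-file site into a base64-encoded ZIP for the publisher in the last entry. The requirement that index.html sit at the ZIP root comes from the description; the tool name publish_site and its zip_b64 parameter are assumptions.

```python
import base64
import io
import zipfile

def call_tool(name: str, **args) -> dict: ...  # hypothetical MCP client helper

def publish_zip(files: dict[str, str]) -> dict:
    """files maps relative paths to text content; must include 'index.html' at the root."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, content in files.items():
            zf.writestr(path, content)
    zip_b64 = base64.b64encode(buf.getvalue()).decode("ascii")
    return call_tool("publish_site", zip_b64=zip_b64)  # assumed tool and parameter names

# Example:
# publish_zip({"index.html": "<h1>Hello</h1>", "css/site.css": "h1 { color: teal; }"})
```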