Glama
134,827 tools. Last updated 2026-05-14 08:14

"A service for downloading files from MEGA cloud storage" matching MCP tools:

  • FOR CLAUDE DESKTOP ONLY (with filesystem access). For Claude.ai/web: use create_upload_session instead; it provides a browser upload link.
    Upload local media to cloud storage, returning a public HTTPS URL.
    WHEN TO USE:
    • Instagram, LinkedIn, Threads, X: REQUIRED for local files before calling publish_content
    • TikTok: NOT NEEDED; pass the local path directly to publish_content
    SUPPORTED FORMATS:
    • Images: jpg, png, gif, webp (max 10MB)
    • Videos: mp4, mov, webm (max 100MB)
    Returns { url: 'https://...' } for use in publish_content's mediaUrl parameter.
    Connector
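    A minimal sketch of what a call to this upload tool could look like, written as a TypeScript tools/call payload. The tool name `upload_media` and the file path are placeholders, since the listing does not state the registered tool name:
    ```
    // Hypothetical payload; "upload_media" is a guessed name, not confirmed by the listing.
    const uploadCall = {
      name: "upload_media",
      arguments: {
        // Local file path (Claude Desktop only); placeholder value.
        // Claude.ai/web must use create_upload_session instead.
        path: "/Users/me/clips/demo.mp4", // mp4 video, max 100MB per the listing
      },
    } as const;
    // Per the listing, the response is { url: "https://..." },
    // which feeds publish_content's mediaUrl parameter.
    ```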
  • # delete_file
    ## When to use
    DESTRUCTIVE — IRREVERSIBLE. Permanently delete a file from the user's Drive. Removes the file from S3 storage and the database. Storage quota is freed immediately. ALWAYS ask for explicit user confirmation before calling this tool.
    ## Parameters to validate before calling
    - file_token (string, required) — The file token (UUID) of the file to delete. Get via fetch_files.
    ## Notes
    - DESTRUCTIVE — IRREVERSIBLE. Always confirm with the user before calling. Explain what will be lost.
    Connector
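    A minimal sketch of a delete_file call, assuming the MCP tools/call shape of a name plus arguments. The UUID is a placeholder; a real file_token comes from fetch_files:
    ```
    // DESTRUCTIVE: per the listing, always get explicit user confirmation first.
    const deleteCall = {
      name: "delete_file",
      arguments: {
        file_token: "123e4567-e89b-12d3-a456-426614174000", // placeholder UUID from fetch_files
      },
    } as const;
    ```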
  • List all available Pine Script v6 documentation files with descriptions. Returns files organised by category with descriptions. For small files use get_doc(path). For large files (ta.md, strategy.md, collections.md, drawing.md, general.md) use list_sections(path) then get_section(path, header).
    Connector
  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, then the assumption is that the file is stored locally. If the file is local, then the file must be uploaded to Cloud Storage before you can make the actual `import_data` call. To upload the file to Cloud Storage, you can use the `gcloud` or `gsutil` commands. Before you upload the file to Cloud Storage, consider whether you want to use an existing bucket or create a new bucket in the provided project.
    After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows:
    1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field.
    2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket. Use a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to be propagated to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, then wait a few minutes and try again.
    After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension: for example, `.sql` for a SQL file or `.csv` for a CSV file.
    The following is a sample SQL `importContext` for MySQL:
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL"
    }
    ```
    There is no `database` parameter for MySQL, since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`.
    For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified:
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL",
      "database": "sample-db"
    }
    ```
    The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
  • INSPECTION: Inspect GCP infrastructure for a deployed project.
    ⚠️ PREREQUISITE: This tool requires a prior deployment ATTEMPT (successful or failed). Check convostatus for hasDeployAttempt=true before calling. Works even after failed deploys to inspect orphaned resources.
    Inspect deployed GCP resources after a deployment attempt. Use this tool when the user asks about the status or details of their deployed GCP infrastructure. It fetches temporary read-only credentials securely and queries the GCP API directly.
    RESPONSE TIERS (default is summary for token efficiency):
    - Summary (default): key fields only (~500 tokens). Set detail=false, raw=false, or omit both.
    - Detail: full metadata for a specific resource. Set detail=true plus a resource filter.
    - Raw: complete unprocessed API response. Set raw=true.
    REQUIRES: session_id from the convoopen response (format: sess_v2_...).
    Supported services: apigateway, bastion, billing, cloudarmor, cloudbuild, cloudcdn, cloudfunctions, cloudkms, cloudlogging, cloudmonitoring, cloudrun, cloudsql, compute, firestore, gcs, gke, identityplatform, loadbalancer, memorystore, pubsub, secretmanager, vertexai, vpc. For a specific service's actions, call with action="list-actions".
    METRICS: Use list-metrics to see available Cloud Monitoring metrics for any service (no credentials needed — progressive disclosure). Use get-metrics to retrieve time-series data. Optional filters JSON: {"hours":6,"period":300}. Label breakdowns: Cloud Functions (by status), Load Balancer/API Gateway (by response_code_class), Cloud CDN (by cache_result). Secret Manager get-metrics returns operational health (version count, replication, create time) — no time series. Bastion is an alias for Compute Engine metrics (SSH connection count is not available as a GCP metric).
    BILLING: Use service=billing to inspect GCP billing. Actions: get-billing-info (check whether billing is enabled and which billing account is used), get-budgets (list budget alerts for the project — auto-fetches the billing account). Requires the roles/billing.viewer IAM role.
    Required IAM roles: Monitoring Viewer (roles/monitoring.viewer) for metrics, Secret Manager Viewer (roles/secretmanager.viewer) for secret health, Billing Viewer (roles/billing.viewer) for billing.
    EXAMPLES:
    - gcpinspect(session_id=..., service="compute", action="list-instances")
    - gcpinspect(session_id=..., service="gke", action="list-clusters")
    - gcpinspect(session_id=..., service="cloudsql", action="get-metrics", filters="{\"hours\":6}")
    - gcpinspect(session_id=..., service="billing", action="get-billing-info")
    Connector
  • Wait for the user to securely connect their cloud account and subscribe to Luther Systems. Polls until credentials appear on the session.
    🎯 USE THIS TOOL WHEN: tfdeploy returns an 'auth_required', 'no_credentials', or 'credentials_expired' error. The user needs to visit the connect URL to:
    1. Connect their cloud credentials (AWS or GCP)
    2. Sign up and subscribe to a Luther Systems plan (required for deployment)
    This secure connection allows InsideOut to deploy and manage infrastructure in the user's cloud account on their behalf. Credentials are handled securely and only used for deployment and management sessions.
    WORKFLOW:
    1. FIRST: Present the connect URL and explanation to the user (from the tfdeploy error response)
    2. THEN: Call this tool to begin polling for credentials
    3. The user opens the URL in their browser to subscribe and add credentials
    4. When credentials are found, inform the user and call tfdeploy to deploy
    IMPORTANT: Do NOT call this tool without first showing the connect URL to the user. The user needs to see the URL to complete the process.
    REQUIRES: session_id from the convoopen response (format: sess_v2_...).
    OPTIONAL: cloud ('aws' or 'gcp'), timeout (integer, seconds to wait; default 300, max 600).
    Connector
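    A minimal sketch of polling for credentials after showing the user the connect URL. The tool name `wait_for_credentials` is a guess, since the listing does not state it; the parameters follow the REQUIRES/OPTIONAL fields above:
    ```
    // Call only AFTER presenting the connect URL from the tfdeploy error response.
    const waitCall = {
      name: "wait_for_credentials", // placeholder name, not confirmed by the listing
      arguments: {
        session_id: "sess_v2_example", // placeholder; from the convoopen response
        cloud: "aws",                  // optional: "aws" or "gcp"
        timeout: 300,                  // optional; seconds, default 300, max 600
      },
    } as const;
    ```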

Matching MCP Servers

Matching MCP Connectors

  • Encrypted A2A object storage for autonomous agent state and artifacts

  • AI-to-AI petrol station. 56 pay-per-call endpoints covering market signals, crypto/DeFi, geopolitics, earnings, insider trades, SEC filings, sanctions screening, ArXiv research, whale tracking, and more. Micropayments in USDC on Base Mainnet via x402 protocol.

  • Submits a demo request. A human at A Cloud Frontier will follow up by email. Use only with the prospect's explicit consent.
    Connector
  • Edit a file in the solution's GitHub repo and commit. Two modes:
    1. FULL FILE: provide `content` — replaces the entire file (good for new files or small files)
    2. SEARCH/REPLACE: provide `search` + `replace` — surgical edit without sending the full file (preferred for large files like server.js)
    Always use search/replace for large files (>5KB). Always read the file first with ateam_github_read to get the exact text to search for.
    Connector
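    A minimal sketch of the search/replace mode. The tool name `ateam_github_edit` is a guess (the listing names only ateam_github_read), and the file path and snippets are placeholders; the search text should be copied verbatim from a prior ateam_github_read call:
    ```
    const editCall = {
      name: "ateam_github_edit", // placeholder name, not confirmed by the listing
      arguments: {
        path: "server.js",                          // placeholder file path
        search: "const PORT = 3000;",               // exact text read from the file
        replace: "const PORT = process.env.PORT;",  // surgical replacement
      },
    } as const;
    ```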
  • Get an exact sat cost quote for a service BEFORE creating a payment. Useful for budget-aware agents to price-check before committing. No payment required, no side effects. Pass service=text-to-speech&chars=1500, service=translate&chars=800, service=transcribe-audio&minutes=5, etc. Returns { amount_sats, breakdown, currency }. Omit params to see the full catalog of supported services.
    Connector
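    A minimal sketch of a price check, mirroring the listing's query-string example. The tool name `quote_price` is a placeholder; the `service` and `chars` values come from the listing:
    ```
    const quoteCall = {
      name: "quote_price", // placeholder name, not confirmed by the listing
      arguments: {
        service: "text-to-speech",
        chars: 1500,
      },
    } as const;
    // Per the listing: returns { amount_sats, breakdown, currency };
    // omit both parameters to see the full service catalog.
    ```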
  • Get contents of multiple files from a remote public git repository in a single call. Reduces round-trips when you need to read several related files. Max 10 files per batch, 5000 total lines budget across all files. Each file supports optional line ranges. Failed files return per-file errors without blocking other files.
    Connector
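    A minimal sketch of a batched read, assuming a files array with optional line ranges. The tool and field names (`get_files`, `repo`, `start_line`, `end_line`) are guesses; only the limits come from the listing:
    ```
    const batchRead = {
      name: "get_files", // placeholder name, not confirmed by the listing
      arguments: {
        repo: "https://github.com/example/repo", // placeholder repository URL
        files: [
          { path: "README.md" },                                  // whole file
          { path: "src/index.ts", start_line: 1, end_line: 120 }, // ranged read (field names assumed)
        ],
      },
    } as const;
    // Limits per the listing: max 10 files, 5000 total lines across the batch.
    ```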
  • Upload connector code to Core and restart — WITHOUT redeploying skills. Use this to update connector source code (server.js, UI assets, plugins) quickly. Set github=true to pull files from the solution's GitHub repo, or pass files directly. Much faster than ateam_build_and_run for connector-only changes.
    Connector
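    A minimal sketch of a connector-only update pulled from GitHub. The tool name `ateam_upload_connector` is a guess; only the github=true flag comes from the listing:
    ```
    const connectorUpdate = {
      name: "ateam_upload_connector", // placeholder name, not confirmed by the listing
      arguments: {
        github: true, // pull server.js, UI assets, and plugins from the solution's repo
      },
    } as const;
    ```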
  • Read a specific Pine Script v6 documentation file. For large files (ta.md, strategy.md, collections.md, drawing.md, general.md) prefer list_sections() + get_section() to avoid loading 1000-2800 line files into context.
    Connector
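    A minimal sketch of the two access patterns, assuming the MCP tools/call shape. The path and header values are placeholders; real ones come from the file listing tool and list_sections:
    ```
    // Small file: read it whole.
    const smallRead = {
      name: "get_doc",
      arguments: { path: "alerts.md" }, // placeholder path
    } as const;

    // Large file (ta.md, strategy.md, ...): list sections, then fetch one.
    const sectionRead = {
      name: "get_section",
      arguments: {
        path: "ta.md",
        header: "ta.sma", // placeholder header from a prior list_sections call
      },
    } as const;
    ```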
  • Assigns a premium license to a display, removing its watermark and making it ad-free. For personal displays the license is taken from the user's free pool. For organization displays the license is taken from the group's allocated slot pool (use allocate_licenses first to give the organization licenses). Each license also adds +30 MB to the display's context storage pool. Requires admin scope. Returns id, name, isPremiumAssigned, badgeMode and freeLicenses.
    Connector
  • INSPECTION: Retrieve Terraform outputs from a completed deployment.
    Returns structured output values (VPC IDs, endpoints, cluster names, etc.) after a successful deploy. Sensitive outputs are redacted (shown as '(sensitive)'). By default, returns outputs for the latest successful deploy; optionally specify job_id to get outputs for a specific deployment.
    REQUIRES: session_id from the convoopen response (format: sess_v2_...).
    OPTIONAL: job_id (specific deployment), lifecycle (filter by step, e.g. 'cloud-provision').
    Connector
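    A minimal sketch of retrieving outputs for a specific deployment. The tool name `tfoutputs` is a guess, since the listing does not state it; the parameters follow the REQUIRES/OPTIONAL fields above:
    ```
    const outputsCall = {
      name: "tfoutputs", // placeholder name, not confirmed by the listing
      arguments: {
        session_id: "sess_v2_example", // placeholder; from the convoopen response
        job_id: "job_example",         // optional placeholder: a specific deployment
        lifecycle: "cloud-provision",  // optional: filter by lifecycle step
      },
    } as const;
    // Sensitive outputs come back redacted as '(sensitive)'.
    ```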
  • Crop transparent edges from one or more PNG files. Single file returns PNG; multiple files return a ZIP.
    Connector
  • Run an agent-callable Cloud Check against Swift or Axint TypeScript source. Accepts inline source or a sourcePath, then returns a Cloud-style verdict, Apple-specific findings, next... Use: for Apple-aware source review and repair prompts; provide evidence for UI/runtime claims. Effects: read-only response from the provided source/path; may use the configured Cloud Check endpoint; no source is sent unless provided.
    Connector
  • Quote price for a service at a business. Deterministic lookup of pricing_json_v2.ranges[]; LLM fallback on miss, labelled 'estimate' with disclaimer.
    Connector
  • Create a database user for a Cloud SQL instance.
    * This tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    * When you use the `create_user` tool, specify the type of user: `CLOUD_IAM_USER`, `CLOUD_IAM_SERVICE_ACCOUNT`, or `BUILT_IN`.
    * By default the newly created user is assigned the `cloudsqlsuperuser` role, unless you specify other database roles explicitly in the request.
    * You can use a newly created user with the `execute_sql` tool if the user is a currently logged-in IAM user. The `execute_sql` tool executes the SQL statements using the privileges of the database user logged in using IAM database authentication.
    The `create_user` tool has the following limitations:
    * To create a built-in user with a password, use the `password_secret_version` field to provide the password using Google Cloud Secret Manager. The value of `password_secret_version` should be the resource name of the secret version, like `projects/12345/locations/us-central1/secrets/my-password-secret/versions/1` or `projects/12345/locations/us-central1/secrets/my-password-secret/versions/latest`. The caller needs the `secretmanager.secretVersions.access` permission on the secret version.
    * The `create_user` tool doesn't support creating a user for SQL Server.
    To create an IAM user in PostgreSQL:
    * The database username must be the IAM user's email address, all lowercase. For example, to create a user for the PostgreSQL IAM user `example-user@example.com`, you can use the following request:
    ```
    {
      "name": "example-user@example.com",
      "type": "CLOUD_IAM_USER",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM user is `example-user@example.com`.
    To create an IAM service account in PostgreSQL:
    * The database username must be created without the `.gserviceaccount.com` suffix, even though the full email address for the account is `service-account-name@project-id.iam.gserviceaccount.com`. For example, to create an IAM service account for PostgreSQL you can use the following request format:
    ```
    {
      "name": "test@test-project.iam",
      "type": "CLOUD_IAM_SERVICE_ACCOUNT",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM service account is `test@test-project.iam`.
    To create an IAM user or IAM service account in MySQL:
    * When Cloud SQL for MySQL stores a username, it truncates the @ and the domain name from the user or service account's email address. For example, `example-user@example.com` becomes `example-user`.
    * For this reason, you can't add two IAM users or service accounts with the same username but different domain names to the same Cloud SQL instance.
    * For example, to create a user for the MySQL IAM user `example-user@example.com`, use the following request:
    ```
    {
      "name": "example-user@example.com",
      "type": "CLOUD_IAM_USER",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM user is `example-user`.
    * For example, to create the MySQL IAM service account `service-account-name@project-id.iam.gserviceaccount.com`, use the following request:
    ```
    {
      "name": "service-account-name@project-id.iam.gserviceaccount.com",
      "type": "CLOUD_IAM_SERVICE_ACCOUNT",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM service account is `service-account-name`.
    Connector